Test Report: Docker_Linux_crio_arm64 21997

ee66eb73e5650a3c34c21fac75605dac5b258565:2025-12-02:42611

Failed tests: 57 of 316

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.36
44 TestAddons/parallel/Registry 15.97
45 TestAddons/parallel/RegistryCreds 0.5
46 TestAddons/parallel/Ingress 143.13
47 TestAddons/parallel/InspektorGadget 6.32
48 TestAddons/parallel/MetricsServer 5.42
50 TestAddons/parallel/CSI 47.21
51 TestAddons/parallel/Headlamp 3.57
52 TestAddons/parallel/CloudSpanner 6.31
53 TestAddons/parallel/LocalPath 13.52
54 TestAddons/parallel/NvidiaDevicePlugin 6.3
55 TestAddons/parallel/Yakd 6.26
106 TestFunctional/parallel/ServiceCmdConnect 603.45
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.33
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
155 TestFunctional/parallel/ServiceCmd/DeployApp 600.85
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
162 TestFunctional/parallel/ServiceCmd/Format 0.52
163 TestFunctional/parallel/ServiceCmd/URL 0.69
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 508.51
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.47
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.33
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.44
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.38
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.55
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.74
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.42
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.27
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.68
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 2.34
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.14
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 117.07
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.89
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.89
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.3
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.21
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.63
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.25
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.28
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
293 TestJSONOutput/pause/Command 1.78
299 TestJSONOutput/unpause/Command 2.23
358 TestKubernetesUpgrade 802.23
384 TestPause/serial/Pause 7.58
442 TestStartStop/group/newest-cni/serial/SecondStart 7200.072
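
Any entry above can be replayed in isolation. A minimal sketch, assuming a minikube source checkout with the out/minikube-linux-arm64 binary already built (the harness shells out to it, as the logs below show); only standard go test flags are used, and the timeout value is an arbitrary choice:

    # Re-run a single failed test; -run treats each slash-separated
    # subtest level as an anchored regular expression.
    go test ./test/integration -v -timeout 30m -run 'TestAddons/serial/Volcano'
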
TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable volcano --alsologtostderr -v=1: exit status 11 (357.858846ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:11:17.691866  454119 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:11:17.692819  454119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:17.692856  454119 out.go:374] Setting ErrFile to fd 2...
	I1202 21:11:17.692868  454119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:17.693295  454119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:11:17.693685  454119 mustload.go:66] Loading cluster: addons-656754
	I1202 21:11:17.694153  454119 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:17.694175  454119 addons.go:622] checking whether the cluster is paused
	I1202 21:11:17.694315  454119 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:17.694331  454119 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:11:17.694924  454119 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:11:17.733253  454119 ssh_runner.go:195] Run: systemctl --version
	I1202 21:11:17.733317  454119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:11:17.751310  454119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:11:17.875269  454119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:11:17.875367  454119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:11:17.912767  454119 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:11:17.912796  454119 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:11:17.912801  454119 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:11:17.912806  454119 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:11:17.912809  454119 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:11:17.912813  454119 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:11:17.912815  454119 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:11:17.912819  454119 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:11:17.912821  454119 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:11:17.912827  454119 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:11:17.912830  454119 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:11:17.912833  454119 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:11:17.912836  454119 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:11:17.912840  454119 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:11:17.912843  454119 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:11:17.912848  454119 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:11:17.912855  454119 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:11:17.912859  454119 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:11:17.912862  454119 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:11:17.912865  454119 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:11:17.912870  454119 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:11:17.912875  454119 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:11:17.912878  454119 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:11:17.912882  454119 cri.go:89] found id: ""
	I1202 21:11:17.912933  454119 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:11:17.941579  454119 out.go:203] 
	W1202 21:11:17.944455  454119 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:11:17.944481  454119 out.go:285] * 
	W1202 21:11:17.950038  454119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:11:17.953183  454119 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.36s)
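
This exit status 11, like the identical MK_ADDON_DISABLE_PAUSED failures in the Registry and RegistryCreds sections below, traces to the stderr above: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and /run/runc does not exist on this CRI-O node. A minimal sketch for confirming that by hand, reusing the profile from this run (the grep pattern is an assumption about how the runtime tables are named in the generated config):

    # Reproduce the failing pause check from the stderr above:
    out/minikube-linux-arm64 -p addons-656754 ssh "sudo runc list -f json"
    # expected: level=error msg="open /run/runc: no such file or directory"

    # Inspect which low-level OCI runtime CRI-O actually uses; if it is crun,
    # or runc with a non-default state root, /run/runc is never created:
    out/minikube-linux-arm64 -p addons-656754 ssh "sudo crio config" | grep -A 5 "crio.runtime.runtimes"
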

TestAddons/parallel/Registry (15.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.052624ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003420501s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003859876s
addons_test.go:392: (dbg) Run:  kubectl --context addons-656754 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-656754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-656754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.325711342s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable registry --alsologtostderr -v=1: exit status 11 (361.063022ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:11:44.893272  454589 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:11:44.894069  454589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:44.894084  454589 out.go:374] Setting ErrFile to fd 2...
	I1202 21:11:44.894090  454589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:44.894373  454589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:11:44.894674  454589 mustload.go:66] Loading cluster: addons-656754
	I1202 21:11:44.895096  454589 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:44.895116  454589 addons.go:622] checking whether the cluster is paused
	I1202 21:11:44.895229  454589 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:44.895245  454589 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:11:44.895762  454589 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:11:44.919923  454589 ssh_runner.go:195] Run: systemctl --version
	I1202 21:11:44.919989  454589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:11:44.938479  454589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:11:45.048113  454589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:11:45.048237  454589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:11:45.121840  454589 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:11:45.121873  454589 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:11:45.121879  454589 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:11:45.121883  454589 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:11:45.121887  454589 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:11:45.121892  454589 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:11:45.121895  454589 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:11:45.121899  454589 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:11:45.121904  454589 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:11:45.121911  454589 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:11:45.121915  454589 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:11:45.121918  454589 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:11:45.121933  454589 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:11:45.121937  454589 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:11:45.121941  454589 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:11:45.121951  454589 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:11:45.121955  454589 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:11:45.121962  454589 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:11:45.121966  454589 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:11:45.121969  454589 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:11:45.121975  454589 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:11:45.121978  454589 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:11:45.121981  454589 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:11:45.121985  454589 cri.go:89] found id: ""
	I1202 21:11:45.122384  454589 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:11:45.167427  454589 out.go:203] 
	W1202 21:11:45.173968  454589 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:11:45.174114  454589 out.go:285] * 
	W1202 21:11:45.191288  454589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:11:45.196410  454589 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.97s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.603567ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-656754
addons_test.go:332: (dbg) Run:  kubectl --context addons-656754 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (264.165341ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:26.930318  456581 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:26.931127  456581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:26.931184  456581 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:26.931211  456581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:26.931541  456581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:26.931968  456581 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:26.932529  456581 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:26.932577  456581 addons.go:622] checking whether the cluster is paused
	I1202 21:12:26.932735  456581 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:26.932788  456581 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:26.933449  456581 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:26.951658  456581 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:26.951720  456581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:26.973523  456581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:27.077818  456581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:27.077912  456581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:27.106319  456581 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:27.106342  456581 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:27.106360  456581 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:27.106381  456581 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:27.106392  456581 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:27.106397  456581 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:27.106400  456581 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:27.106403  456581 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:27.106407  456581 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:27.106413  456581 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:27.106421  456581 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:27.106424  456581 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:27.106443  456581 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:27.106453  456581 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:27.106457  456581 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:27.106461  456581 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:27.106464  456581 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:27.106468  456581 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:27.106480  456581 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:27.106485  456581 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:27.106491  456581 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:27.106498  456581 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:27.106501  456581 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:27.106504  456581 cri.go:89] found id: ""
	I1202 21:12:27.106574  456581 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:27.125472  456581 out.go:203] 
	W1202 21:12:27.128546  456581 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:27.128570  456581 out.go:285] * 
	W1202 21:12:27.134171  456581 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:27.137129  456581 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (143.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-656754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-656754 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-656754 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [62749eab-4ff3-45e8-a17e-6a16874db937] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [62749eab-4ff3-45e8-a17e-6a16874db937] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003953411s
I1202 21:12:31.329830  447211 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.415257064s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-656754 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
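
The `ssh: Process exited with status 28` above is curl's exit code for an operation timeout, consistent with the probe taking 2m10s before giving up. A hedged sketch for narrowing this down by hand, reusing the test's own probe plus a controller check (namespace taken from the wait command at the top of this test):

    # Same probe the test ran, verbose and capped at 10 seconds:
    out/minikube-linux-arm64 -p addons-656754 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Is the ingress-nginx controller running and exposing port 80?
    kubectl --context addons-656754 -n ingress-nginx get pods,svc -o wide
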
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-656754
helpers_test.go:243: (dbg) docker inspect addons-656754:

-- stdout --
	[
	    {
	        "Id": "efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036",
	        "Created": "2025-12-02T21:08:59.231811527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448603,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:08:59.296791297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/hostname",
	        "HostsPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/hosts",
	        "LogPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036-json.log",
	        "Name": "/addons-656754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-656754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-656754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036",
	                "LowerDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-656754",
	                "Source": "/var/lib/docker/volumes/addons-656754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-656754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-656754",
	                "name.minikube.sigs.k8s.io": "addons-656754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9edec1bfac1be8f8d951bd8d9f55267a5f117dbd28895252fdd0ac72ca0282e",
	            "SandboxKey": "/var/run/docker/netns/c9edec1bfac1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-656754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:58:d4:c4:78:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99d20b9fca2e0f43d68a83eb1455218fde6d1486f2da0b1dcae3ebb9594c9f46",
	                    "EndpointID": "01b0167302bc7a99382dc25c557580a0a0d8b63c67c67e397ade2e5624404f71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-656754",
	                        "efe0c78f1497"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-656754 -n addons-656754
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-656754 logs -n 25: (1.458525661s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-798204                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-798204 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ --download-only -p binary-mirror-045307 --alsologtostderr --binary-mirror http://127.0.0.1:40293 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045307   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ -p binary-mirror-045307                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-045307   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ addons  │ enable dashboard -p addons-656754                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-656754                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ start   │ -p addons-656754 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:11 UTC │
	│ addons  │ addons-656754 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ ip      │ addons-656754 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │ 02 Dec 25 21:11 UTC │
	│ addons  │ addons-656754 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ ssh     │ addons-656754 ssh cat /opt/local-path-provisioner/pvc-3df1e97b-8903-4317-b848-7da6166c304a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │ 02 Dec 25 21:12 UTC │
	│ addons  │ addons-656754 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-656754 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-656754                                                                                                                                                                                                                                                                                                                                                                                           │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │ 02 Dec 25 21:12 UTC │
	│ addons  │ addons-656754 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ ssh     │ addons-656754 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ ip      │ addons-656754 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:14 UTC │ 02 Dec 25 21:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:08:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
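The header layout above is the standard klog format, so every entry below can be decoded mechanically. A minimal sketch in Go (our own illustration, not part of minikube) that splits one such line into its fields:

	// klogparse.go - split a klog-style line into severity, date, time,
	// pid, source location, and message. The regexp mirrors the
	// "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format above.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		line := "I1202 21:08:53.266224  448211 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("sev=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}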
	I1202 21:08:53.266224  448211 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:08:53.266434  448211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:53.266461  448211 out.go:374] Setting ErrFile to fd 2...
	I1202 21:08:53.266483  448211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:53.267078  448211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:08:53.267555  448211 out.go:368] Setting JSON to false
	I1202 21:08:53.268387  448211 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10262,"bootTime":1764699472,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:08:53.268457  448211 start.go:143] virtualization:  
	I1202 21:08:53.271683  448211 out.go:179] * [addons-656754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:08:53.275598  448211 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:08:53.275740  448211 notify.go:221] Checking for updates...
	I1202 21:08:53.281520  448211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:08:53.284498  448211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:08:53.287365  448211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:08:53.290198  448211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:08:53.293096  448211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:08:53.296256  448211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:08:53.324844  448211 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:08:53.324964  448211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:53.386135  448211 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:53.37684628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:53.386244  448211 docker.go:319] overlay module found
	I1202 21:08:53.390919  448211 out.go:179] * Using the docker driver based on user configuration
	I1202 21:08:53.393696  448211 start.go:309] selected driver: docker
	I1202 21:08:53.393715  448211 start.go:927] validating driver "docker" against <nil>
	I1202 21:08:53.393728  448211 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:08:53.394454  448211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:53.447649  448211 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:53.438130105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:53.447802  448211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:08:53.448068  448211 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:08:53.451056  448211 out.go:179] * Using Docker driver with root privileges
	I1202 21:08:53.453864  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:08:53.453934  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:08:53.453948  448211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:08:53.454025  448211 start.go:353] cluster config:
	{Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:08:53.457104  448211 out.go:179] * Starting "addons-656754" primary control-plane node in "addons-656754" cluster
	I1202 21:08:53.459832  448211 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:08:53.462701  448211 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:08:53.465618  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:53.465663  448211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 21:08:53.465676  448211 cache.go:65] Caching tarball of preloaded images
	I1202 21:08:53.465687  448211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:08:53.465759  448211 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 21:08:53.465769  448211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 21:08:53.466097  448211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json ...
	I1202 21:08:53.466117  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json: {Name:mka7b54be10a861bfb995eaef2daf2bf1910d7e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:08:53.484454  448211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:08:53.484475  448211 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 21:08:53.484494  448211 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:08:53.484555  448211 start.go:360] acquireMachinesLock for addons-656754: {Name:mk3a37f4628ff59aab4458c86531034220273f2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:08:53.484657  448211 start.go:364] duration metric: took 80.887µs to acquireMachinesLock for "addons-656754"
	I1202 21:08:53.484691  448211 start.go:93] Provisioning new machine with config: &{Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:08:53.484759  448211 start.go:125] createHost starting for "" (driver="docker")
	I1202 21:08:53.488112  448211 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 21:08:53.488338  448211 start.go:159] libmachine.API.Create for "addons-656754" (driver="docker")
	I1202 21:08:53.488367  448211 client.go:173] LocalClient.Create starting
	I1202 21:08:53.488477  448211 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem
	I1202 21:08:53.722163  448211 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem
	I1202 21:08:53.837794  448211 cli_runner.go:164] Run: docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 21:08:53.853895  448211 cli_runner.go:211] docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 21:08:53.853991  448211 network_create.go:284] running [docker network inspect addons-656754] to gather additional debugging logs...
	I1202 21:08:53.854011  448211 cli_runner.go:164] Run: docker network inspect addons-656754
	W1202 21:08:53.870292  448211 cli_runner.go:211] docker network inspect addons-656754 returned with exit code 1
	I1202 21:08:53.870322  448211 network_create.go:287] error running [docker network inspect addons-656754]: docker network inspect addons-656754: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-656754 not found
	I1202 21:08:53.870335  448211 network_create.go:289] output of [docker network inspect addons-656754]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-656754 not found
	
	** /stderr **
	I1202 21:08:53.870435  448211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:08:53.886760  448211 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1550}
	I1202 21:08:53.886800  448211 network_create.go:124] attempt to create docker network addons-656754 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 21:08:53.886855  448211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-656754 addons-656754
	I1202 21:08:53.944633  448211 network_create.go:108] docker network addons-656754 192.168.49.0/24 created
	I1202 21:08:53.944662  448211 kic.go:121] calculated static IP "192.168.49.2" for the "addons-656754" container
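As the two lines above show, the node's static IP follows directly from the subnet just created: .1 is the network gateway and .2 goes to the container. A quick stdlib-only sketch of that arithmetic (illustration only, not minikube's code):

	// staticip.go - derive gateway and node IP from the subnet picked above.
	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		prefix := netip.MustParsePrefix("192.168.49.0/24") // subnet chosen above
		gateway := prefix.Addr().Next()                    // 192.168.49.1, the network gateway
		node := gateway.Next()                             // 192.168.49.2, the container's static IP
		fmt.Println(gateway, node)
	}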
	I1202 21:08:53.944750  448211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 21:08:53.960453  448211 cli_runner.go:164] Run: docker volume create addons-656754 --label name.minikube.sigs.k8s.io=addons-656754 --label created_by.minikube.sigs.k8s.io=true
	I1202 21:08:53.978468  448211 oci.go:103] Successfully created a docker volume addons-656754
	I1202 21:08:53.978554  448211 cli_runner.go:164] Run: docker run --rm --name addons-656754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --entrypoint /usr/bin/test -v addons-656754:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 21:08:55.170613  448211 cli_runner.go:217] Completed: docker run --rm --name addons-656754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --entrypoint /usr/bin/test -v addons-656754:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.192018358s)
	I1202 21:08:55.170643  448211 oci.go:107] Successfully prepared a docker volume addons-656754
	I1202 21:08:55.170694  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:55.170709  448211 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 21:08:55.170783  448211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-656754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 21:08:59.165951  448211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-656754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.995125032s)
	I1202 21:08:59.165981  448211 kic.go:203] duration metric: took 3.995268089s to extract preloaded images to volume ...
	W1202 21:08:59.166125  448211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 21:08:59.166225  448211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 21:08:59.217551  448211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-656754 --name addons-656754 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-656754 --network addons-656754 --ip 192.168.49.2 --volume addons-656754:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 21:08:59.505717  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Running}}
	I1202 21:08:59.531161  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:08:59.567199  448211 cli_runner.go:164] Run: docker exec addons-656754 stat /var/lib/dpkg/alternatives/iptables
	I1202 21:08:59.626088  448211 oci.go:144] the created container "addons-656754" has a running status.
	I1202 21:08:59.626116  448211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa...
	I1202 21:09:00.328370  448211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 21:09:00.364489  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:00.397099  448211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 21:09:00.397136  448211 kic_runner.go:114] Args: [docker exec --privileged addons-656754 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 21:09:00.465850  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:00.488381  448211 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:00.488497  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.509221  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.509608  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.509627  448211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:00.675150  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-656754
	
	I1202 21:09:00.675177  448211 ubuntu.go:182] provisioning hostname "addons-656754"
	I1202 21:09:00.675251  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.698942  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.699292  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.699317  448211 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-656754 && echo "addons-656754" | sudo tee /etc/hostname
	I1202 21:09:00.868764  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-656754
	
	I1202 21:09:00.868906  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.886124  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.886446  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.886462  448211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-656754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-656754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-656754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:01.039485  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:09:01.039531  448211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:09:01.039552  448211 ubuntu.go:190] setting up certificates
	I1202 21:09:01.039566  448211 provision.go:84] configureAuth start
	I1202 21:09:01.039638  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:01.058348  448211 provision.go:143] copyHostCerts
	I1202 21:09:01.058435  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:09:01.058571  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:09:01.058647  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:09:01.058709  448211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.addons-656754 san=[127.0.0.1 192.168.49.2 addons-656754 localhost minikube]
	I1202 21:09:01.260946  448211 provision.go:177] copyRemoteCerts
	I1202 21:09:01.261073  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:01.261117  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.279268  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:01.383248  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:09:01.401849  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 21:09:01.420992  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:01.439469  448211 provision.go:87] duration metric: took 399.879692ms to configureAuth
	I1202 21:09:01.439542  448211 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:01.439771  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:01.439893  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.457531  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:01.457852  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:01.457873  448211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:09:01.971440  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:09:01.971461  448211 machine.go:97] duration metric: took 1.48305281s to provisionDockerMachine
	I1202 21:09:01.971472  448211 client.go:176] duration metric: took 8.483099172s to LocalClient.Create
	I1202 21:09:01.971483  448211 start.go:167] duration metric: took 8.483147707s to libmachine.API.Create "addons-656754"
	I1202 21:09:01.971490  448211 start.go:293] postStartSetup for "addons-656754" (driver="docker")
	I1202 21:09:01.971500  448211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:01.971561  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:01.971599  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.990558  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.096212  448211 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:02.099721  448211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:02.099748  448211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:02.099760  448211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:09:02.099833  448211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:09:02.099854  448211 start.go:296] duration metric: took 128.35855ms for postStartSetup
	I1202 21:09:02.100202  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:02.119624  448211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json ...
	I1202 21:09:02.119946  448211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:02.119998  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.139107  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.240356  448211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:02.244966  448211 start.go:128] duration metric: took 8.760184934s to createHost
	I1202 21:09:02.245044  448211 start.go:83] releasing machines lock for "addons-656754", held for 8.760370586s
	I1202 21:09:02.245137  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:02.262447  448211 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:02.262545  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.262809  448211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:02.262862  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.284303  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.296507  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.481807  448211 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:02.488179  448211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:09:02.538141  448211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:09:02.542549  448211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:02.542629  448211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:02.573398  448211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 21:09:02.573425  448211 start.go:496] detecting cgroup driver to use...
	I1202 21:09:02.573460  448211 detect.go:187] detected "cgroupfs" cgroup driver on host os
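detect.go reports "cgroupfs" here, consistent with an Ubuntu 20.04 host that still defaults to cgroup v1. One common heuristic for that distinction (a sketch of ours, not necessarily the exact check minikube performs) is to probe for the cgroup v2 unified-hierarchy marker file:

	// cgroupdetect.go - rough cgroup v1 vs v2 heuristic.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup.controllers only exists at the root of a v2 unified hierarchy.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (systemd driver is typical)")
		} else {
			fmt.Println("cgroup v1 (cgroupfs driver, as detected above)")
		}
	}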
	I1202 21:09:02.573515  448211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:09:02.592850  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:09:02.605566  448211 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:02.605678  448211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:02.624011  448211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:02.642921  448211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:02.771477  448211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:02.894856  448211 docker.go:234] disabling docker service ...
	I1202 21:09:02.894964  448211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:02.916596  448211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:02.930037  448211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:03.058442  448211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:03.187713  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:09:03.200186  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:03.213557  448211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:09:03.213625  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.222603  448211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:09:03.222675  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.231643  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.240257  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.248938  448211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:03.257532  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.266305  448211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.279725  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
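Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]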
	I1202 21:09:03.288468  448211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:03.295721  448211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:03.302783  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:03.421682  448211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:09:03.610102  448211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:09:03.610187  448211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:09:03.613933  448211 start.go:564] Will wait 60s for crictl version
	I1202 21:09:03.614000  448211 ssh_runner.go:195] Run: which crictl
	I1202 21:09:03.617306  448211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:03.652589  448211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:09:03.652773  448211 ssh_runner.go:195] Run: crio --version
	I1202 21:09:03.683240  448211 ssh_runner.go:195] Run: crio --version
	I1202 21:09:03.719005  448211 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 21:09:03.721942  448211 cli_runner.go:164] Run: docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:03.738544  448211 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:03.742661  448211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:09:03.753173  448211 kubeadm.go:884] updating cluster {Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:03.753302  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:09:03.753364  448211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:03.797799  448211 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:09:03.797824  448211 crio.go:433] Images already preloaded, skipping extraction
	I1202 21:09:03.797889  448211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:03.823645  448211 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:09:03.823670  448211 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:09:03.823679  448211 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 21:09:03.823821  448211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-656754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:09:03.823912  448211 ssh_runner.go:195] Run: crio config
	I1202 21:09:03.888535  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:09:03.888558  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:09:03.888575  448211 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:03.888598  448211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-656754 NodeName:addons-656754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:03.888730  448211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-656754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
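	The generated config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that iterates the documents and prints each kind, assuming gopkg.in/yaml.v3 is available and the stream has been saved locally as kubeadm.yaml (hypothetical path):

	// kubeadmkinds.go - list apiVersion/kind for each document in the stream.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}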
	
	I1202 21:09:03.888808  448211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 21:09:03.896420  448211 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:03.896498  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:03.904075  448211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 21:09:03.916869  448211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 21:09:03.929345  448211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1202 21:09:03.941919  448211 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:03.945617  448211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:09:03.955700  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:04.098960  448211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:04.116114  448211 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754 for IP: 192.168.49.2
	I1202 21:09:04.116178  448211 certs.go:195] generating shared ca certs ...
	I1202 21:09:04.116208  448211 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.116388  448211 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:09:04.298499  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt ...
	I1202 21:09:04.298532  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt: {Name:mkb7268e5d2cf4e490ec2757b1e751cce88ddc08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.298760  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key ...
	I1202 21:09:04.298771  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key: {Name:mkdde83518864eb9b1cff6e81c6693452a945a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.298852  448211 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:09:05.180432  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt ...
	I1202 21:09:05.180466  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt: {Name:mke28285c3a28f9ad2afd40d9b0e756b7a14c822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.180663  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key ...
	I1202 21:09:05.180676  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key: {Name:mkd75d5c61a930c130a6a239e8592d110d7f3480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.180758  448211 certs.go:257] generating profile certs ...
	I1202 21:09:05.180821  448211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key
	I1202 21:09:05.180838  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt with IP's: []
	I1202 21:09:05.230623  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt ...
	I1202 21:09:05.230648  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: {Name:mk0f70759a7c70fb2a447382a2388f55fc38c755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.230824  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key ...
	I1202 21:09:05.230838  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key: {Name:mkd5ace339531361dfdc33e0f946bf26b87c6257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.230949  448211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521
	I1202 21:09:05.230972  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 21:09:05.512469  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 ...
	I1202 21:09:05.512499  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521: {Name:mk4c8cd0b801465a2237024ca94662ba57997484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.512678  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521 ...
	I1202 21:09:05.512693  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521: {Name:mke22ae8582c1300fd8908bc71c19cd6e64f6576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.512774  448211 certs.go:382] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt
	I1202 21:09:05.512852  448211 certs.go:386] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key
	I1202 21:09:05.512905  448211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key
	I1202 21:09:05.512925  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt with IP's: []
	I1202 21:09:05.871101  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt ...
	I1202 21:09:05.871133  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt: {Name:mka961017afb64a240b7bdf35c1f056407603063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.871315  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key ...
	I1202 21:09:05.871329  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key: {Name:mk736075cf81bf75740f699e84f8edbb27af1c62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.871514  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:05.871559  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:09:05.871589  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:05.871620  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:05.872184  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:05.891801  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:05.910820  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:05.929216  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:09:05.946956  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 21:09:05.965801  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 21:09:05.984076  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:06.002512  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:09:06.027565  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:06.047684  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:06.062495  448211 ssh_runner.go:195] Run: openssl version
	I1202 21:09:06.069246  448211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:06.078339  448211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.082432  448211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.082522  448211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.124250  448211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
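
The two commands above follow OpenSSL's hashed-directory convention: "openssl x509 -hash" prints the subject-name hash of the CA (b5213941 here), and a symlink named <hash>.0 under /etc/ssl/certs lets verifiers locate minikubeCA.pem. A sketch replaying the same lookup on the node:

    # Recompute the subject hash and confirm the symlink it selects.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # resolves to minikubeCA.pem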
	I1202 21:09:06.133155  448211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:06.136937  448211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 21:09:06.136988  448211 kubeadm.go:401] StartCluster: {Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
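
The StartCluster struct above is the fully resolved profile configuration; minikube persists the same fields as JSON under the profile directory. A sketch for pulling out a few of them, assuming jq is installed on the host (illustrative only):

    # The persisted profile config mirrors the StartCluster fields logged above.
    jq '.KubernetesConfig | {KubernetesVersion, ContainerRuntime, ServiceCIDR}' \
      /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json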
	I1202 21:09:06.137072  448211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:06.137139  448211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:06.167415  448211 cri.go:89] found id: ""
	I1202 21:09:06.167489  448211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:06.175532  448211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:09:06.183575  448211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:09:06.183663  448211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:09:06.191890  448211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:09:06.191911  448211 kubeadm.go:158] found existing configuration files:
	
	I1202 21:09:06.191965  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 21:09:06.199779  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:09:06.199867  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:09:06.207332  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 21:09:06.215111  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:09:06.215180  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:09:06.222888  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 21:09:06.230977  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:09:06.231081  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:09:06.238621  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 21:09:06.246390  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:09:06.246483  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
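
The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs (on this first start all four files are simply absent, hence the exit-status-2 greps). The same logic as a compact sketch:

    # Drop kubeconfigs that do not point at the expected API endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done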
	I1202 21:09:06.253820  448211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:09:06.293541  448211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 21:09:06.293604  448211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:09:06.318494  448211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:09:06.318571  448211 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:09:06.318612  448211 kubeadm.go:319] OS: Linux
	I1202 21:09:06.318662  448211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:09:06.318714  448211 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:09:06.318765  448211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:09:06.318816  448211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:09:06.318866  448211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:09:06.318918  448211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:09:06.318968  448211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:09:06.319035  448211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:09:06.319087  448211 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:09:06.395889  448211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:09:06.396004  448211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:09:06.396125  448211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:09:06.406396  448211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:09:06.413232  448211 out.go:252]   - Generating certificates and keys ...
	I1202 21:09:06.413331  448211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:09:06.413404  448211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:09:06.617290  448211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 21:09:07.128480  448211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 21:09:07.324242  448211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 21:09:07.670014  448211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 21:09:08.370628  448211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 21:09:08.370971  448211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-656754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:09:08.470089  448211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 21:09:08.470535  448211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-656754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:09:08.776289  448211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 21:09:09.028556  448211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 21:09:09.195953  448211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 21:09:09.196200  448211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:09:09.843871  448211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:09:10.317059  448211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:09:10.645220  448211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:09:10.760418  448211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:09:11.321988  448211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:09:11.322839  448211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:09:11.325787  448211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:09:11.331066  448211 out.go:252]   - Booting up control plane ...
	I1202 21:09:11.331177  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:09:11.331269  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:09:11.331344  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:09:11.345904  448211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:09:11.346186  448211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:09:11.354727  448211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:09:11.354831  448211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:09:11.354878  448211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:09:11.491535  448211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:09:11.491661  448211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:09:13.489284  448211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001018408s
	I1202 21:09:13.493007  448211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 21:09:13.493111  448211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 21:09:13.493221  448211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 21:09:13.493332  448211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 21:09:16.854768  448211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.36105387s
	I1202 21:09:18.080127  448211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.587080452s
	I1202 21:09:19.994798  448211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501659107s
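
The control-plane checks above poll fixed local endpoints, which makes a failed wait easy to replay by hand. A sketch using the exact URLs from the log (-k because the serving certs are cluster-signed):

    curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
    curl -s  http://127.0.0.1:10248/healthz       # kubelet (plain HTTP)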
	I1202 21:09:20.039672  448211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 21:09:20.058371  448211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 21:09:20.075296  448211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 21:09:20.075558  448211 kubeadm.go:319] [mark-control-plane] Marking the node addons-656754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 21:09:20.092888  448211 kubeadm.go:319] [bootstrap-token] Using token: s833ce.4fiprx753etcuhgl
	I1202 21:09:20.095752  448211 out.go:252]   - Configuring RBAC rules ...
	I1202 21:09:20.095884  448211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 21:09:20.103046  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 21:09:20.116377  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 21:09:20.122187  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 21:09:20.128505  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 21:09:20.133106  448211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 21:09:20.402878  448211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 21:09:20.848860  448211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 21:09:21.402130  448211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 21:09:21.403324  448211 kubeadm.go:319] 
	I1202 21:09:21.403405  448211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 21:09:21.403415  448211 kubeadm.go:319] 
	I1202 21:09:21.403492  448211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 21:09:21.403501  448211 kubeadm.go:319] 
	I1202 21:09:21.403526  448211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 21:09:21.403588  448211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 21:09:21.403644  448211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 21:09:21.403653  448211 kubeadm.go:319] 
	I1202 21:09:21.403707  448211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 21:09:21.403715  448211 kubeadm.go:319] 
	I1202 21:09:21.403762  448211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 21:09:21.403768  448211 kubeadm.go:319] 
	I1202 21:09:21.403820  448211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 21:09:21.403898  448211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 21:09:21.403970  448211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 21:09:21.403978  448211 kubeadm.go:319] 
	I1202 21:09:21.404080  448211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 21:09:21.404160  448211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 21:09:21.404166  448211 kubeadm.go:319] 
	I1202 21:09:21.404251  448211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s833ce.4fiprx753etcuhgl \
	I1202 21:09:21.404357  448211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d4cda52a6893d5340cae35e7f1bec4a8a826aaefc3b1aeca8da4a9d2d90cc2f0 \
	I1202 21:09:21.404381  448211 kubeadm.go:319] 	--control-plane 
	I1202 21:09:21.404389  448211 kubeadm.go:319] 
	I1202 21:09:21.404474  448211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 21:09:21.404481  448211 kubeadm.go:319] 
	I1202 21:09:21.404564  448211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s833ce.4fiprx753etcuhgl \
	I1202 21:09:21.404675  448211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d4cda52a6893d5340cae35e7f1bec4a8a826aaefc3b1aeca8da4a9d2d90cc2f0 
	I1202 21:09:21.407353  448211 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1202 21:09:21.407580  448211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:09:21.407689  448211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:09:21.407709  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:09:21.407717  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:09:21.412778  448211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 21:09:21.415607  448211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 21:09:21.419590  448211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 21:09:21.419607  448211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 21:09:21.434204  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 21:09:21.750342  448211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 21:09:21.750502  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:21.750587  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-656754 minikube.k8s.io/updated_at=2025_12_02T21_09_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=addons-656754 minikube.k8s.io/primary=true
	I1202 21:09:21.929166  448211 ops.go:34] apiserver oom_adj: -16
	I1202 21:09:21.929273  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:22.430338  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:22.929392  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:23.430248  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:23.929946  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:24.429383  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:24.930075  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:25.429547  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:25.930067  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:26.041868  448211 kubeadm.go:1114] duration metric: took 4.29140643s to wait for elevateKubeSystemPrivileges
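
The burst of identical "kubectl get sa default" runs above is minikube polling for the default ServiceAccount so the minikube-rbac cluster-admin binding created at 21:09:21 can take effect; the poll repeats roughly every 500ms until the ServiceAccount exists. A reduced sketch of that wait loop:

    # Poll until the controller manager has created the default ServiceAccount.
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done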
	I1202 21:09:26.041895  448211 kubeadm.go:403] duration metric: took 19.90491098s to StartCluster
	I1202 21:09:26.041913  448211 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:26.042032  448211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:09:26.042409  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:26.042611  448211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:09:26.042792  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 21:09:26.043069  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:26.043103  448211 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 21:09:26.043180  448211 addons.go:70] Setting yakd=true in profile "addons-656754"
	I1202 21:09:26.043194  448211 addons.go:239] Setting addon yakd=true in "addons-656754"
	I1202 21:09:26.043216  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.043723  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.044292  448211 addons.go:70] Setting inspektor-gadget=true in profile "addons-656754"
	I1202 21:09:26.044323  448211 addons.go:239] Setting addon inspektor-gadget=true in "addons-656754"
	I1202 21:09:26.044350  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.044819  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.044961  448211 addons.go:70] Setting metrics-server=true in profile "addons-656754"
	I1202 21:09:26.045005  448211 addons.go:239] Setting addon metrics-server=true in "addons-656754"
	I1202 21:09:26.045034  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.045446  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.045924  448211 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-656754"
	I1202 21:09:26.045952  448211 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-656754"
	I1202 21:09:26.045992  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.046472  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.047617  448211 addons.go:70] Setting cloud-spanner=true in profile "addons-656754"
	I1202 21:09:26.047658  448211 addons.go:239] Setting addon cloud-spanner=true in "addons-656754"
	I1202 21:09:26.047697  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.048217  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.048381  448211 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-656754"
	I1202 21:09:26.048398  448211 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-656754"
	I1202 21:09:26.048421  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.048823  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.054557  448211 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-656754"
	I1202 21:09:26.054638  448211 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-656754"
	I1202 21:09:26.054675  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.055235  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.063737  448211 addons.go:70] Setting registry=true in profile "addons-656754"
	I1202 21:09:26.063819  448211 addons.go:239] Setting addon registry=true in "addons-656754"
	I1202 21:09:26.063889  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.064418  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.067208  448211 addons.go:70] Setting default-storageclass=true in profile "addons-656754"
	I1202 21:09:26.067263  448211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-656754"
	I1202 21:09:26.067769  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.082288  448211 addons.go:70] Setting registry-creds=true in profile "addons-656754"
	I1202 21:09:26.082323  448211 addons.go:239] Setting addon registry-creds=true in "addons-656754"
	I1202 21:09:26.082365  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.082844  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.087065  448211 addons.go:70] Setting gcp-auth=true in profile "addons-656754"
	I1202 21:09:26.087107  448211 mustload.go:66] Loading cluster: addons-656754
	I1202 21:09:26.087441  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:26.087694  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.097006  448211 addons.go:70] Setting storage-provisioner=true in profile "addons-656754"
	I1202 21:09:26.097040  448211 addons.go:239] Setting addon storage-provisioner=true in "addons-656754"
	I1202 21:09:26.097080  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.097566  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.098434  448211 addons.go:70] Setting ingress=true in profile "addons-656754"
	I1202 21:09:26.098460  448211 addons.go:239] Setting addon ingress=true in "addons-656754"
	I1202 21:09:26.098507  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.098925  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.120724  448211 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-656754"
	I1202 21:09:26.120758  448211 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-656754"
	I1202 21:09:26.121099  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.122719  448211 addons.go:70] Setting ingress-dns=true in profile "addons-656754"
	I1202 21:09:26.122748  448211 addons.go:239] Setting addon ingress-dns=true in "addons-656754"
	I1202 21:09:26.122790  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.123361  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.147366  448211 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:26.148353  448211 addons.go:70] Setting volcano=true in profile "addons-656754"
	I1202 21:09:26.148393  448211 addons.go:239] Setting addon volcano=true in "addons-656754"
	I1202 21:09:26.148429  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.149619  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.151790  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:26.188853  448211 addons.go:70] Setting volumesnapshots=true in profile "addons-656754"
	I1202 21:09:26.188888  448211 addons.go:239] Setting addon volumesnapshots=true in "addons-656754"
	I1202 21:09:26.188922  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.189424  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.219511  448211 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 21:09:26.289505  448211 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 21:09:26.330753  448211 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 21:09:26.332507  448211 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 21:09:26.332536  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 21:09:26.332597  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.335596  448211 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 21:09:26.336218  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 21:09:26.336299  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.350924  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.357897  448211 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 21:09:26.358218  448211 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 21:09:26.361693  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 21:09:26.361719  448211 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 21:09:26.361788  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.362032  448211 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 21:09:26.362075  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 21:09:26.362158  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.381754  448211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:26.384723  448211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:26.384748  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:26.384814  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.388082  448211 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 21:09:26.390947  448211 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 21:09:26.393730  448211 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 21:09:26.393752  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 21:09:26.393820  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.401408  448211 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 21:09:26.404466  448211 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 21:09:26.404491  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 21:09:26.404563  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.405573  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 21:09:26.405590  448211 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 21:09:26.405653  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.421009  448211 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 21:09:26.424830  448211 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 21:09:26.424854  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 21:09:26.424933  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.433475  448211 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-656754"
	I1202 21:09:26.433520  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.433934  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	W1202 21:09:26.441837  448211 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
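
The warning above shows the volcano addon bailing out immediately: it does not support the crio container runtime, so enabling it on this driver/runtime combination fails by design rather than by timeout. A sketch for confirming the addon's resulting state for this profile:

    # volcano should remain disabled under crio.
    minikube -p addons-656754 addons list | grep volcano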
	I1202 21:09:26.445221  448211 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 21:09:26.445436  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 21:09:26.455436  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 21:09:26.460711  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 21:09:26.460798  448211 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 21:09:26.460919  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.481418  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 21:09:26.481955  448211 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 21:09:26.481982  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 21:09:26.482047  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.485088  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 21:09:26.504452  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 21:09:26.515372  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 21:09:26.519839  448211 addons.go:239] Setting addon default-storageclass=true in "addons-656754"
	I1202 21:09:26.519882  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.520331  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.528595  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.549265  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 21:09:26.549680  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:26.550952  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.561577  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 21:09:26.567176  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 21:09:26.573913  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 21:09:26.583118  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 21:09:26.583146  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 21:09:26.583217  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.587050  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:26.587258  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.598423  448211 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 21:09:26.598505  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 21:09:26.598630  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.642863  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.644879  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.659991  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.671303  448211 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 21:09:26.674256  448211 out.go:179]   - Using image docker.io/busybox:stable
	I1202 21:09:26.677023  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.677406  448211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 21:09:26.677421  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 21:09:26.677983  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.678348  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.685013  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.724777  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.727201  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.745943  448211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:26.745967  448211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:26.746028  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.784208  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.785663  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.791751  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.804057  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:27.053623  448211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.010802597s)
	I1202 21:09:27.053636  448211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:27.053848  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
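
The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address 192.168.49.1, and inserts a log directive ahead of errors to enable query logging. A sketch for checking that the fragment actually landed in the live ConfigMap:

    # The Corefile in the coredns ConfigMap should now carry the injected hosts block.
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'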
	I1202 21:09:27.208181  448211 node_ready.go:35] waiting up to 6m0s for node "addons-656754" to be "Ready" ...
	I1202 21:09:27.326760  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 21:09:27.326785  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 21:09:27.345695  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 21:09:27.345726  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 21:09:27.359933  448211 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 21:09:27.359970  448211 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 21:09:27.370794  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 21:09:27.377137  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 21:09:27.377165  448211 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 21:09:27.406489  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 21:09:27.419043  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:27.483156  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 21:09:27.483199  448211 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 21:09:27.497638  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 21:09:27.497664  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 21:09:27.514286  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 21:09:27.546232  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 21:09:27.571950  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 21:09:27.571986  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 21:09:27.584233  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 21:09:27.584270  448211 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 21:09:27.587417  448211 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 21:09:27.587447  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 21:09:27.602622  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.605349  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 21:09:27.605374  448211 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 21:09:27.605712  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 21:09:27.610499  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 21:09:27.630191  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 21:09:27.630262  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 21:09:27.632503  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 21:09:27.632566  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 21:09:27.649254  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 21:09:27.668493  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 21:09:27.680449  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 21:09:27.727469  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
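Each addon above is installed by a single kubectl apply run over SSH, using the kubeconfig and kubectl binary baked into the node and passing all of that addon's manifests as repeated -f flags. Any of these steps can in principle be replayed by hand; a sketch, assuming the profile name from this run:

	minikube -p addons-656754 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml"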
	I1202 21:09:27.745054  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 21:09:27.745119  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 21:09:27.789749  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 21:09:27.789825  448211 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 21:09:27.926584  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 21:09:27.926656  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 21:09:27.951425  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 21:09:27.951491  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 21:09:27.985416  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 21:09:27.985482  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 21:09:28.164889  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 21:09:28.166759  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 21:09:28.166816  448211 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 21:09:28.169023  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 21:09:28.169084  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 21:09:28.309761  448211 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:28.309824  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 21:09:28.424164  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 21:09:28.424226  448211 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 21:09:28.584293  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:28.664643  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 21:09:28.664721  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 21:09:28.911983  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 21:09:28.912088  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 21:09:29.136439  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 21:09:29.136501  448211 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1202 21:09:29.226244  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:29.377185  448211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.323310085s)
	I1202 21:09:29.377260  448211 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
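The two-second command above rewrites the coredns ConfigMap in place: it splices a hosts block ahead of the forward . /etc/resolv.conf directive so that host.minikube.internal resolves to the gateway, and adds the log plugin before errors. The stanza inserted into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}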
	I1202 21:09:29.429629  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 21:09:29.893626  448211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-656754" context rescaled to 1 replicas
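The rescale targets the same cluster; a manual equivalent, assuming a kubectl context named after this profile:

	kubectl --context addons-656754 -n kube-system scale deployment coredns --replicas=1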
	W1202 21:09:31.254253  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:31.480781  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.109951054s)
	I1202 21:09:31.480899  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.074386859s)
	I1202 21:09:31.480932  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.06185483s)
	I1202 21:09:31.480984  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.966676196s)
	I1202 21:09:31.481063  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.878419367s)
	I1202 21:09:31.481082  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.875354445s)
	I1202 21:09:31.481099  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.870574786s)
	I1202 21:09:31.481117  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.934761677s)
	W1202 21:09:31.568848  448211 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
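This warning is Kubernetes' optimistic-concurrency conflict (HTTP 409): the StorageClass was modified between minikube's read and its update, so the stale write was rejected. The standard remedy is to re-read and retry, or to use a server-side merge patch, which sidesteps the read-modify-write race. A hypothetical manual retry of the failed step, marking "standard" as the default class via the documented annotation:

	kubectl --context addons-656754 patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'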
	I1202 21:09:32.239273  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.589939086s)
	I1202 21:09:32.239567  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.512027594s)
	I1202 21:09:32.239593  448211 addons.go:495] Verifying addon metrics-server=true in "addons-656754"
	I1202 21:09:32.239570  448211 addons.go:495] Verifying addon ingress=true in "addons-656754"
	I1202 21:09:32.239651  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.074687454s)
	I1202 21:09:32.239517  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.558996838s)
	I1202 21:09:32.239958  448211 addons.go:495] Verifying addon registry=true in "addons-656754"
	I1202 21:09:32.239489  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.570931567s)
	I1202 21:09:32.243514  448211 out.go:179] * Verifying registry addon...
	I1202 21:09:32.243522  448211 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-656754 service yakd-dashboard -n yakd-dashboard
	
	I1202 21:09:32.243643  448211 out.go:179] * Verifying ingress addon...
	I1202 21:09:32.247215  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 21:09:32.248921  448211 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 21:09:32.258910  448211 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 21:09:32.258934  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:32.259756  448211 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 21:09:32.259776  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
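The kapi.go:96 lines that dominate the rest of this log are iterations of a poll loop: list pods matching the label selector, check their phase, sleep, repeat. The same wait can be expressed declaratively; a sketch, assuming the context name:

	kubectl --context addons-656754 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=10m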
	I1202 21:09:32.329857  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.745467828s)
	W1202 21:09:32.329936  448211 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 21:09:32.329974  448211 retry.go:31] will retry after 180.889217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
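The failure above is a CRD establishment race, not a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object were submitted in the same apply, and the class was rejected because the just-created CRDs were not yet served by the API ("no matches for kind ... ensure CRDs are installed first"). minikube simply retries the whole apply about 180ms later (with --force, next line), by which point the CRDs are established. A race-free ordering, sketched with the same file names, would wait for establishment in between:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml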
	I1202 21:09:32.511831  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:32.595901  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.166186746s)
	I1202 21:09:32.595930  448211 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-656754"
	I1202 21:09:32.598966  448211 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 21:09:32.602745  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 21:09:32.620952  448211 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 21:09:32.621019  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:32.758629  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:32.759207  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.106245  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:33.251902  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.252533  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:33.606481  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:33.711559  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:33.750897  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.752816  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:34.032055  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 21:09:34.032165  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:34.051075  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:34.107262  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:34.184803  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 21:09:34.197890  448211 addons.go:239] Setting addon gcp-auth=true in "addons-656754"
	I1202 21:09:34.197939  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:34.198385  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:34.217926  448211 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 21:09:34.217980  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:34.239513  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
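For the gcp-auth setup minikube opens a second SSH session, re-deriving the connection details each time: the docker container inspect template extracts the host port Docker mapped to the node's sshd (22/tcp), and the session authenticates as the docker user with the profile's generated key. Reproduced by hand (port and key path as logged, otherwise hypothetical):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-656754
	ssh -i /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa \
	  -p 33133 docker@127.0.0.1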
	I1202 21:09:34.252237  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:34.252879  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:34.606469  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:34.750356  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:34.752435  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.106955  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:35.252265  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:35.252527  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.312595  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.800673169s)
	I1202 21:09:35.312655  448211 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.094704805s)
	I1202 21:09:35.315984  448211 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 21:09:35.318817  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:35.321602  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 21:09:35.321625  448211 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 21:09:35.336229  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 21:09:35.336255  448211 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 21:09:35.350325  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 21:09:35.350347  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 21:09:35.363375  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 21:09:35.607261  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:35.757116  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:35.757541  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.882846  448211 addons.go:495] Verifying addon gcp-auth=true in "addons-656754"
	I1202 21:09:35.886559  448211 out.go:179] * Verifying gcp-auth addon...
	I1202 21:09:35.890259  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 21:09:35.894296  448211 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 21:09:35.894367  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
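The gcp-auth addon runs a mutating admission webhook in its own gcp-auth namespace; once the webhook pod is Ready, it injects the credentials copied earlier into newly created pods. Its state can be checked with the same label selector the log polls on:

	kubectl --context addons-656754 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth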
	I1202 21:09:36.106646  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:36.211413  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:36.250517  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:36.253006  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:36.393720  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:36.605717  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:36.750837  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:36.753721  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:36.894804  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:37.105855  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:37.250518  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:37.251645  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:37.393489  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:37.606422  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:37.750224  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:37.752584  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:37.893509  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:38.106508  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:38.211502  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:38.250183  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:38.252794  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:38.393596  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:38.606688  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:38.751817  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:38.753124  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:38.893504  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:39.106612  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:39.251415  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:39.251900  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:39.394303  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:39.606542  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:39.750365  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:39.752412  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:39.893842  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:40.105646  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:40.250072  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:40.252127  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:40.393834  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:40.605849  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:40.711590  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:40.750599  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:40.752903  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:40.894105  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:41.105773  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:41.250951  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:41.252561  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:41.393630  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:41.606387  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:41.750449  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:41.752631  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:41.893415  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:42.108778  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:42.251554  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:42.252340  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:42.393182  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:42.606209  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:42.750687  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:42.751458  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:42.893846  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:43.105599  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:43.211551  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:43.250399  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:43.253057  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:43.393307  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:43.606564  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:43.750461  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:43.752289  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:43.893129  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:44.106458  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:44.250210  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:44.252510  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:44.393320  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:44.606334  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:44.751806  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:44.751899  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:44.894080  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:45.107230  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:45.214458  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:45.250984  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:45.253864  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:45.394159  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:45.606653  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:45.750176  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:45.752523  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:45.893302  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:46.107922  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:46.250363  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:46.252366  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:46.393652  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:46.605589  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:46.750300  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:46.752577  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:46.893586  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:47.105607  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:47.250527  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:47.252564  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:47.393755  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:47.609880  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:47.711664  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:47.750286  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:47.751623  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:47.893324  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:48.106119  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:48.250488  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:48.251833  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:48.394044  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:48.605621  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:48.751812  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:48.752030  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:48.894778  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:49.105694  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:49.250103  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:49.252032  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:49.395504  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:49.606427  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:49.751033  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:49.752135  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:49.894473  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:50.106595  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:50.211507  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:50.251625  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:50.252406  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:50.393142  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:50.606311  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:50.750847  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:50.751421  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:50.893994  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:51.105905  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:51.250567  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:51.252554  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:51.394618  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:51.607127  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:51.750544  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:51.751533  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:51.893831  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:52.105671  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:52.211596  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:52.251622  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:52.253262  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:52.394477  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:52.605860  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:52.750205  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:52.752170  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:52.893141  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:53.105819  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:53.250361  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:53.252212  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:53.393269  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:53.606356  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:53.751604  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:53.752114  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:53.893936  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:54.105881  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:54.211854  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:54.250762  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:54.251451  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:54.393759  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:54.605513  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:54.750346  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:54.752136  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:54.893262  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:55.113878  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:55.250796  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:55.251525  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:55.393612  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:55.606528  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:55.751040  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:55.752084  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:55.894041  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:56.106177  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:56.215923  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:56.250575  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:56.251890  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:56.395412  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:56.606126  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:56.750472  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:56.751826  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:56.893655  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:57.106657  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:57.251175  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:57.251997  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:57.394192  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:57.606162  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:57.750828  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:57.751562  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:57.893283  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:58.106301  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:58.251574  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:58.251635  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:58.393871  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:58.605549  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:58.711463  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:58.750268  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:58.752408  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:58.893340  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:59.106333  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:59.251713  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:59.252326  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:59.393318  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:59.606038  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:59.750777  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:59.754331  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:59.893544  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:00.108016  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:00.252195  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:00.269625  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:00.394828  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:00.605951  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:10:00.712418  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:10:00.750260  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:00.752615  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:00.893744  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:01.105786  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:01.251333  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:01.251800  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:01.393836  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:01.606411  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:01.750582  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:01.753376  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:01.893611  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:02.107086  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:02.251059  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:02.253524  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:02.393477  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:02.607489  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:02.750323  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:02.752488  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:02.893672  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:03.106806  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:10:03.211584  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:10:03.250619  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:03.252833  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:03.394290  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:03.606786  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:03.750285  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:03.752197  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:03.893172  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:04.105959  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:04.251609  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:04.252254  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:04.393213  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:04.606238  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:04.751448  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:04.753570  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:04.893583  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:05.105745  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:05.251201  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:05.252539  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:05.393702  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:05.606732  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:10:05.711750  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:10:05.750682  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:05.751727  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:05.893642  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:06.106595  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:06.250181  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:06.252134  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:06.393184  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:06.606254  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:06.750181  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:06.752493  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:06.893741  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:07.105752  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:07.250806  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:07.252798  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:07.393683  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:07.606715  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:07.744905  448211 node_ready.go:49] node "addons-656754" is "Ready"
	I1202 21:10:07.744937  448211 node_ready.go:38] duration metric: took 40.536721997s for node "addons-656754" to be "Ready" ...
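The node_ready.go wait above polls the node object until its Ready condition reports True, ticking roughly every 2.5 seconds. A minimal sketch of that kind of poll with client-go follows; the kubeconfig path and the cadence are assumptions for illustration, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the test cluster at this path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the Ready condition is True, as the log above does
	// for node "addons-656754".
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-656754", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		fmt.Println("node not Ready yet, will retry")
		time.Sleep(2500 * time.Millisecond) // assumed cadence, matching the log spacing
	}
}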
	I1202 21:10:07.744951  448211 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:10:07.745019  448211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:10:07.775481  448211 api_server.go:72] duration metric: took 41.732828612s to wait for apiserver process to appear ...
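The api_server.go process wait shells the pgrep command shown above into the node over SSH. A hedged local equivalent (run directly rather than via ssh_runner, purely for illustration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until pgrep finds the kube-apiserver process, mirroring the
// ssh_runner command in the log. -x: exact match, -n: newest,
// -f: match against the full command line.
func main() {
	for {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil { // pgrep exits non-zero when there is no match
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(time.Second)
	}
}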
	I1202 21:10:07.775508  448211 api_server.go:88] waiting for apiserver healthz status ...
	I1202 21:10:07.775528  448211 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 21:10:07.778303  448211 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 21:10:07.778327  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:07.778470  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:07.789029  448211 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 21:10:07.798627  448211 api_server.go:141] control plane version: v1.34.2
	I1202 21:10:07.798659  448211 api_server.go:131] duration metric: took 23.143958ms to wait for apiserver health ...
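The healthz wait is a plain HTTPS GET against the endpoint logged above, expecting a 200 with body "ok". A minimal sketch; skipping TLS verification is an assumption made only because this fragment has no cluster CA bundle, and is not how minikube itself probes:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}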
	I1202 21:10:07.798669  448211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 21:10:07.830103  448211 system_pods.go:59] 19 kube-system pods found
	I1202 21:10:07.830140  448211 system_pods.go:61] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending
	I1202 21:10:07.830147  448211 system_pods.go:61] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:07.830151  448211 system_pods.go:61] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:07.830156  448211 system_pods.go:61] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:07.830159  448211 system_pods.go:61] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:07.830164  448211 system_pods.go:61] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:07.830167  448211 system_pods.go:61] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:07.830171  448211 system_pods.go:61] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:07.830175  448211 system_pods.go:61] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending
	I1202 21:10:07.830180  448211 system_pods.go:61] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:07.830184  448211 system_pods.go:61] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:07.830188  448211 system_pods.go:61] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:07.830195  448211 system_pods.go:61] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:07.830199  448211 system_pods.go:61] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending
	I1202 21:10:07.830203  448211 system_pods.go:61] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:07.830213  448211 system_pods.go:61] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:07.830216  448211 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:07.830220  448211 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:07.830223  448211 system_pods.go:61] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending
	I1202 21:10:07.830237  448211 system_pods.go:74] duration metric: took 31.560606ms to wait for pod list to return data ...
	I1202 21:10:07.830245  448211 default_sa.go:34] waiting for default service account to be created ...
	I1202 21:10:07.838601  448211 default_sa.go:45] found service account: "default"
	I1202 21:10:07.838635  448211 default_sa.go:55] duration metric: took 8.380143ms for default service account to be created ...
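The default_sa wait reduces to checking that the "default" ServiceAccount exists in the "default" namespace. A rough sketch under the same kubeconfig assumption as above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// One probe of the wait loop: does the default service account exist yet?
	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		fmt.Println("service account not created yet:", err)
		return
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}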
	I1202 21:10:07.838647  448211 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 21:10:07.850634  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:07.850663  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending
	I1202 21:10:07.850669  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:07.850673  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:07.850677  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:07.850682  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:07.850686  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:07.850690  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:07.850695  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:07.850699  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending
	I1202 21:10:07.850703  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:07.850708  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:07.850713  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:07.850725  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:07.850734  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:07.850742  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:07.850748  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:07.850751  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:07.850762  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:07.850765  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending
	I1202 21:10:07.850778  448211 retry.go:31] will retry after 298.887349ms: missing components: kube-dns
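The retry.go delays above grow from ~299ms to ~323ms, ~379ms, then ~463ms: roughly exponential with jitter. A self-contained sketch of that pattern (the growth factor and jitter range are guesses from the logged intervals, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds, growing the delay by ~20% plus a
// random jitter each attempt, loosely mirroring the "will retry after
// Xms" lines above.
func retry(fn func() error) {
	delay := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		delay = delay + delay/5 + jitter
	}
}

func main() {
	attempts := 0
	retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("all components running")
}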
	I1202 21:10:07.898231  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:08.117429  448211 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 21:10:08.117451  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
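The kapi.go lines that dominate this log are a label-selector poll: list the matching pods, report the first non-Running one, and loop. A hedged client-go sketch of that shape, using a selector and node name taken from the log (kubeconfig path and kube-system namespace are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				fmt.Printf("all %d pods for %q are Running\n", len(pods.Items), selector)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}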
	I1202 21:10:08.161398  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.161443  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.161451  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:08.161457  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:08.161462  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:08.161466  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.161472  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.161480  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.161488  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.161495  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.161501  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.161506  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.161517  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:08.161522  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:08.161527  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.161537  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:08.161542  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:08.161546  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:08.161556  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:08.161561  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.161576  448211 retry.go:31] will retry after 322.72241ms: missing components: kube-dns
	I1202 21:10:08.260579  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:08.269334  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:08.405765  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:08.498989  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.499045  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.499052  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:08.499060  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:08.499066  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:08.499071  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.499078  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.499086  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.499091  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.499097  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.499106  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.499111  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.499118  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:08.499128  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:08.499134  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.499140  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:08.499150  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:08.499157  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.499166  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.499174  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.499188  448211 retry.go:31] will retry after 379.485511ms: missing components: kube-dns
	I1202 21:10:08.607041  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:08.757542  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:08.757863  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:08.885214  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.885253  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.885263  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 21:10:08.885271  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:08.885279  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:08.885283  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.885288  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.885293  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.885297  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.885305  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.885308  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.885313  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.885330  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:08.885341  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 21:10:08.885350  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.885361  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:08.885367  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:08.885379  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.885385  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.885391  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.885406  448211 retry.go:31] will retry after 462.835389ms: missing components: kube-dns
	I1202 21:10:08.894029  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:09.107030  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:09.250641  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:09.252051  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:09.352064  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:09.352103  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Running
	I1202 21:10:09.352123  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 21:10:09.352130  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:09.352137  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:09.352142  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:09.352146  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:09.352151  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:09.352155  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:09.352161  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:09.352165  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:09.352170  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:09.352176  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:09.352187  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 21:10:09.352193  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:09.352199  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:09.352207  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:09.352213  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:09.352219  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:09.352226  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Running
	I1202 21:10:09.352235  448211 system_pods.go:126] duration metric: took 1.513581205s to wait for k8s-apps to be running ...
	I1202 21:10:09.352246  448211 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 21:10:09.352303  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:10:09.365599  448211 system_svc.go:56] duration metric: took 13.344222ms WaitForService to wait for kubelet
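The WaitForService step runs the systemctl command shown above inside the node over SSH. A local stand-in, mirroring the logged invocation verbatim (illustrative only; the real check goes through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status alone decides the result; --quiet suppresses output.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}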
	I1202 21:10:09.365686  448211 kubeadm.go:587] duration metric: took 43.323050112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:10:09.365711  448211 node_conditions.go:102] verifying NodePressure condition ...
	I1202 21:10:09.368551  448211 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 21:10:09.368582  448211 node_conditions.go:123] node cpu capacity is 2
	I1202 21:10:09.368596  448211 node_conditions.go:105] duration metric: took 2.878795ms to run NodePressure ...
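The NodePressure step reports the node's ephemeral-storage and CPU figures logged above. A rough sketch of reading those values from the node's capacity with client-go (whether node_conditions.go uses capacity or allocatable is an assumption here, as is the kubeconfig path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-656754", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String()) // e.g. 203034800Ki
	fmt.Printf("node cpu capacity is %s\n", cpu.String())                   // e.g. 2
}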
	I1202 21:10:09.368609  448211 start.go:242] waiting for startup goroutines ...
	I1202 21:10:09.393620  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:09.606258  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:09.751670  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:09.753196  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:09.895118  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:10.106888  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:10.253076  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:10.253622  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:10.396497  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:10.607091  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:10.753316  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:10.753569  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:10.893917  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:11.106477  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:11.251477  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:11.254565  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:11.393584  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:11.607615  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:11.753237  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:11.753960  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:11.894137  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:12.106658  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:12.251993  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:12.253613  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:12.393910  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:12.606379  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:12.750363  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:12.752766  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:12.894855  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:13.106546  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:13.250726  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:13.253401  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:13.394066  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:13.606454  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:13.750748  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:13.753852  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:13.894345  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:14.107093  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:14.253092  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:14.255345  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:14.393279  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:14.606620  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:14.753609  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:14.753794  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:14.894275  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:15.119375  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:15.251661  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:15.253724  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:15.394039  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:15.613339  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:15.760215  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:15.760653  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:15.894889  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:16.110229  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:16.256911  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:16.257303  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:16.393823  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:16.607533  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:16.750484  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:16.754120  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:16.894579  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:17.130185  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:17.277653  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:17.278079  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:17.401029  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:17.606246  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:17.751146  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:17.756643  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:17.894037  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:18.106865  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:18.253130  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:18.253299  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:18.393605  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:18.607170  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:18.751573  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:18.752247  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:18.893002  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:19.106778  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:19.253370  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:19.254495  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:19.394549  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:19.607704  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:19.753895  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:19.754280  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:19.894940  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:20.111418  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:20.250992  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:20.251809  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:20.393819  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:20.606504  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:20.751564  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:20.753135  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:20.894429  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:21.111525  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:21.251143  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:21.252215  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:21.394206  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:21.606762  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:21.752369  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:21.752929  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:21.893765  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:22.105986  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:22.251529  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:22.253892  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:22.398487  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:22.607472  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:22.752198  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:22.754080  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:22.894288  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:23.106913  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:23.252079  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:23.252560  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:23.393481  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:23.609184  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:23.750300  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:23.752498  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:23.893818  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:24.108277  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:24.250382  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:24.252607  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:24.393902  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:24.607335  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:24.750709  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:24.753924  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:24.894015  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:25.108082  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:25.252099  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:25.254337  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:25.393899  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:25.608776  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:25.753876  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:25.754666  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:25.895080  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:26.130961  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:26.266676  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:26.266784  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:26.397905  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:26.607191  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:26.750902  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:26.752782  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:26.893788  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:27.108038  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:27.257417  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:27.257585  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:27.393973  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:27.608805  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:27.757704  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:27.758205  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:27.893996  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:28.106763  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:28.253390  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:28.253865  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:28.395236  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:28.607094  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:28.760039  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:28.760623  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:28.894408  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:29.107309  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:29.260275  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:29.260676  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:29.393891  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:29.606822  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:29.751594  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:29.752896  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:29.894786  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:30.107145  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:30.250588  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:30.253217  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:30.394373  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:30.607263  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:30.751404  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:30.753175  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:30.894254  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:31.107074  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:31.253124  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:31.253446  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:31.393709  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:31.606531  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:31.751593  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:31.753170  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:31.895179  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:32.107632  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:32.252495  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:32.252802  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:32.394167  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:32.612343  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:32.752352  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:32.753770  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:32.893962  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:33.106586  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:33.251585  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:33.253622  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:33.394896  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:33.607576  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:33.750041  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:33.752686  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:33.893656  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:34.106004  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:34.252060  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:34.252272  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:34.393827  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:34.606578  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:34.751779  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:34.752074  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:34.893767  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:35.106028  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:35.253195  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:35.253394  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:35.393359  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:35.606238  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:35.751348  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:35.753259  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:35.893398  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:36.107094  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:36.249806  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:36.252166  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:36.393066  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:36.606667  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:36.750904  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:36.751919  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:36.894493  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:37.106960  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:37.251071  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:37.253759  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:37.399395  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:37.606834  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:37.751514  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:37.752267  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:37.893243  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:38.106762  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:38.250988  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:38.253481  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:38.393961  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:38.608605  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:38.751642  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:38.752376  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:38.893497  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:39.106565  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:39.250824  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:39.252652  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:39.393219  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:39.606162  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:39.751529  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:39.752289  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:39.893234  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:40.107330  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:40.251917  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:40.254276  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:40.395637  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:40.606118  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:40.750554  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:40.752430  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:40.893181  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:41.105900  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:41.258159  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:41.258565  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:41.393406  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:41.606380  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:41.750111  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:41.752438  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:41.893789  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:42.106733  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:42.251475  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:42.252052  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:42.395654  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:42.606107  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:42.749993  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:42.752373  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:42.894324  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:43.107756  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:43.257255  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:43.259200  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:43.394274  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:43.607058  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:43.750989  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:43.753501  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:43.893215  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:44.106748  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:44.256115  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:44.256143  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:44.393965  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:44.606677  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:44.752474  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:44.752685  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:44.893684  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:45.107746  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:45.253176  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:45.253606  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:45.394101  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:45.606613  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:45.752852  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:45.753887  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:45.894217  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:46.107526  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:46.252726  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:46.253550  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:46.394164  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:46.611148  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:46.753693  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:46.754970  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:46.894649  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:47.106764  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:47.252601  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:47.252704  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:47.393752  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:47.607656  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:47.750735  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:47.753047  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:47.894355  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:48.107266  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:48.250780  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:48.252243  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:48.393991  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:48.606243  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:48.750698  448211 kapi.go:107] duration metric: took 1m16.503495075s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 21:10:48.752908  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:48.893692  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:49.106891  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:49.252941  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:49.394290  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:49.606880  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:49.752361  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:49.893789  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:50.106484  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:50.252555  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:50.393460  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:50.607139  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:50.753151  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:50.893359  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:51.108386  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:51.253159  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:51.394889  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:51.607332  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:51.753447  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:51.893783  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:52.107840  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:52.251983  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:52.403488  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:52.621476  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:52.753925  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:52.896773  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:53.107312  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:53.253616  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:53.394650  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:53.605756  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:53.751951  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:53.893920  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:54.106867  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:54.251825  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:54.393709  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:54.606827  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:54.753452  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:54.893241  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:55.107167  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:55.256179  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:55.393676  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:55.605952  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:55.752206  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:55.893819  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:56.106248  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:56.252343  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:56.394925  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:56.607073  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:56.752679  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:56.893460  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:57.107070  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:57.252459  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:57.394945  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:57.606630  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:57.753173  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:57.895458  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:58.108339  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:58.252679  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:58.395252  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:58.607081  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:58.752530  448211 kapi.go:107] duration metric: took 1m26.503605645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 21:10:58.893818  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:59.106404  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:59.394145  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:59.606257  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:59.893918  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:00.107978  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:00.396258  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:00.607267  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:00.895392  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:01.107262  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:01.403449  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:01.606877  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:01.894434  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:02.106878  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:02.393863  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:02.606588  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:02.894539  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:03.105865  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:03.393314  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:03.606612  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:03.893823  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:04.106054  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:04.393228  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:04.614896  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:04.894487  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:05.106948  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:11:05.393802  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:05.606883  448211 kapi.go:107] duration metric: took 1m33.004138306s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 21:11:05.898562  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:06.394344  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:06.893843  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:07.394123  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:07.893143  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:08.393701  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:08.894325  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:09.393814  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:09.894318  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:10.405311  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:10.894431  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:11.394400  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:11.894097  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:12.393818  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:12.893431  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:13.394284  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:13.893751  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:14.397300  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:14.894006  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:15.393721  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:15.893545  448211 kapi.go:107] duration metric: took 1m40.003286776s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 21:11:15.896512  448211 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-656754 cluster.
	I1202 21:11:15.899361  448211 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 21:11:15.902171  448211 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 21:11:15.905287  448211 out.go:179] * Enabled addons: inspektor-gadget, nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, cloud-spanner, registry-creds, storage-provisioner-rancher, metrics-server, ingress-dns, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1202 21:11:15.908139  448211 addons.go:530] duration metric: took 1m49.865030127s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin storage-provisioner amd-gpu-device-plugin cloud-spanner registry-creds storage-provisioner-rancher metrics-server ingress-dns yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1202 21:11:15.908194  448211 start.go:247] waiting for cluster config update ...
	I1202 21:11:15.908219  448211 start.go:256] writing updated cluster config ...
	I1202 21:11:15.908519  448211 ssh_runner.go:195] Run: rm -f paused
	I1202 21:11:15.913699  448211 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 21:11:15.916933  448211 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2bvm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.921191  448211 pod_ready.go:94] pod "coredns-66bc5c9577-2bvm4" is "Ready"
	I1202 21:11:15.921217  448211 pod_ready.go:86] duration metric: took 4.258354ms for pod "coredns-66bc5c9577-2bvm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.923383  448211 pod_ready.go:83] waiting for pod "etcd-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.927785  448211 pod_ready.go:94] pod "etcd-addons-656754" is "Ready"
	I1202 21:11:15.927813  448211 pod_ready.go:86] duration metric: took 4.403275ms for pod "etcd-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.929950  448211 pod_ready.go:83] waiting for pod "kube-apiserver-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.936054  448211 pod_ready.go:94] pod "kube-apiserver-addons-656754" is "Ready"
	I1202 21:11:15.936080  448211 pod_ready.go:86] duration metric: took 6.095011ms for pod "kube-apiserver-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.938108  448211 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.317100  448211 pod_ready.go:94] pod "kube-controller-manager-addons-656754" is "Ready"
	I1202 21:11:16.317133  448211 pod_ready.go:86] duration metric: took 379.000587ms for pod "kube-controller-manager-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.518509  448211 pod_ready.go:83] waiting for pod "kube-proxy-zqc2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.917654  448211 pod_ready.go:94] pod "kube-proxy-zqc2s" is "Ready"
	I1202 21:11:16.917685  448211 pod_ready.go:86] duration metric: took 399.147304ms for pod "kube-proxy-zqc2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.118126  448211 pod_ready.go:83] waiting for pod "kube-scheduler-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.517697  448211 pod_ready.go:94] pod "kube-scheduler-addons-656754" is "Ready"
	I1202 21:11:17.517724  448211 pod_ready.go:86] duration metric: took 399.569338ms for pod "kube-scheduler-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.517739  448211 pod_ready.go:40] duration metric: took 1.604009204s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 21:11:17.581683  448211 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 21:11:17.585158  448211 out.go:179] * Done! kubectl is now configured to use "addons-656754" cluster and "default" namespace by default
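
The long run of kapi.go:96 lines above is one polling loop per addon label selector. A minimal sketch of that pattern, assuming client-go and apimachinery (this is not minikube's actual kapi.go, and waitForPodsByLabel is a hypothetical name): poll on a fixed interval, log the observed phase, and print the same kind of "duration metric" line once every matching pod is Running.

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls pods matching selector in ns until all are
// Running, printing one "waiting for pod" line per poll as in the log above.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient API errors and empty lists both read as "still Pending".
				fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}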
	
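The pod_ready.go lines at the end of the start log wait for each kube-system control-plane pod to be "Ready" or be gone. A minimal sketch of that predicate, again assuming client-go (podReadyOrGone is a hypothetical name):

package kapiwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReadyOrGone reports true when the named pod has condition Ready=True,
// or when it no longer exists ("or be gone" in the log above).
func podReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // a deleted pod also satisfies the wait
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}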
	
	==> CRI-O <==
	Dec 02 21:14:34 addons-656754 crio[828]: time="2025-12-02T21:14:34.229587092Z" level=info msg="Removed container 4cbc27327c4d173ab9b7fbdd071af4609ace7ef7dd7643dbd6f485e417ba29a8: kube-system/registry-creds-764b6fb674-bgqc9/registry-creds" id=a6706430-4cac-4a6b-80d3-a492363939e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.312628785Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kc668/POD" id=ffb345a4-ea77-44d4-814c-66f1475da4d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.312721857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.335965417Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kc668 Namespace:default ID:09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273 UID:3a8b0f27-1f8c-4b01-ad22-81c7426e0346 NetNS:/var/run/netns/71688b4e-3f73-4b6b-a4d4-e9d239e01309 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b250}] Aliases:map[]}"
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.336024323Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kc668 to CNI network \"kindnet\" (type=ptp)"
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.352122263Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kc668 Namespace:default ID:09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273 UID:3a8b0f27-1f8c-4b01-ad22-81c7426e0346 NetNS:/var/run/netns/71688b4e-3f73-4b6b-a4d4-e9d239e01309 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b250}] Aliases:map[]}"
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.352283856Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kc668 for CNI network kindnet (type=ptp)"
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.362024451Z" level=info msg="Ran pod sandbox 09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273 with infra container: default/hello-world-app-5d498dc89-kc668/POD" id=ffb345a4-ea77-44d4-814c-66f1475da4d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.363330073Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3881790f-c686-433e-8ec3-aff4d5e2b5a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.363509891Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3881790f-c686-433e-8ec3-aff4d5e2b5a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.363573383Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3881790f-c686-433e-8ec3-aff4d5e2b5a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.366017832Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=cd48bbf1-85d4-4021-b7e3-32c75778494b name=/runtime.v1.ImageService/PullImage
	Dec 02 21:14:42 addons-656754 crio[828]: time="2025-12-02T21:14:42.370420514Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.163894645Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=cd48bbf1-85d4-4021-b7e3-32c75778494b name=/runtime.v1.ImageService/PullImage
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.164693529Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c6d911f7-c93a-4dfb-9054-b856c7b24047 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.167996056Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e19f71b7-a763-48b2-aaae-ba09e6516685 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.175279163Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-kc668/hello-world-app" id=cb21b97a-1baa-478d-83f4-289d1a59807f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.175578054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.188882044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.189227746Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5b648a1700af386da2d167fbb90c052808b4637583d09030004c116cc94ee866/merged/etc/passwd: no such file or directory"
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.189325085Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5b648a1700af386da2d167fbb90c052808b4637583d09030004c116cc94ee866/merged/etc/group: no such file or directory"
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.189675587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.213709548Z" level=info msg="Created container 08a01fe9953e67b88c180eea66f51f286a14ab29708c8845bb23ffd03cafdf1d: default/hello-world-app-5d498dc89-kc668/hello-world-app" id=cb21b97a-1baa-478d-83f4-289d1a59807f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.216974634Z" level=info msg="Starting container: 08a01fe9953e67b88c180eea66f51f286a14ab29708c8845bb23ffd03cafdf1d" id=832ac168-ee0a-4065-bcd7-22b57a4beff0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 21:14:43 addons-656754 crio[828]: time="2025-12-02T21:14:43.22027108Z" level=info msg="Started container" PID=7213 containerID=08a01fe9953e67b88c180eea66f51f286a14ab29708c8845bb23ffd03cafdf1d description=default/hello-world-app-5d498dc89-kc668/hello-world-app id=832ac168-ee0a-4065-bcd7-22b57a4beff0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273
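
The CRI-O excerpt above traces one container launch end to end: ImageStatus finds docker.io/kicbase/echo-server:1.0 absent, PullImage fetches it (the log records the resolved digest), then CreateContainer and StartContainer run it inside the sandbox that RunPodSandbox set up. A rough sketch of that call sequence over the CRI gRPC API, assuming the k8s.io/cri-api Go bindings and CRI-O's default socket path; a real caller (the kubelet) also passes the sandbox's full PodSandboxConfig, omitted here:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O serves the CRI over a local unix socket; the kubelet is its
	// normal client, but anything speaking gRPC can replay these calls.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()
	img := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

	// 1. "Checking image status": a nil Image in the response means not found.
	st, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// 2. "Pulling image": blocks until the pull completes.
		if _, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
			log.Fatal(err)
		}
	}

	// 3. "Creating container" inside the existing pod sandbox
	// (ID taken from the log above).
	sandboxID := "09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "hello-world-app"},
			Image:    img,
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 4. "Starting container".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}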
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	08a01fe9953e6       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   09e045e13e298       hello-world-app-5d498dc89-kc668            default
	9d64edb5134c4       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           2                   bd5ac82d54dd1       registry-creds-764b6fb674-bgqc9            kube-system
	c48133dfedd43       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   abb1243b3588e       nginx                                      default
	dd3c5e853140d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   b184bbc27f328       busybox                                    default
	058e26f7cd421       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ca9df7a63d8ac       gcp-auth-78565c9fb4-qclvf                  gcp-auth
	bbdebabfcb42b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	f21eb5e720d99       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	7fb33e06679de       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	2a54239c986b6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	c7438d482556e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	b5bf09863cdee       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   7665f9a4fb4d8       ingress-nginx-controller-6c8bf45fb-vdzzc   ingress-nginx
	075f5a4ebaab7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   f6b4ce447dcc8       gadget-qk5vw                               gadget
	1047a51792cd7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   617c5d06767e2       registry-proxy-2zlcv                       kube-system
	a0bf837335fef       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   f1fa50166ee69       csi-hostpath-attacher-0                    kube-system
	9c0b0e08f43aa       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             4 minutes ago            Exited              patch                                    2                   92966298d3b25       ingress-nginx-admission-patch-2fnsb        ingress-nginx
	7f1f14868c074       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   c2e307d3f8155       snapshot-controller-7d9fbc56b8-cgbl5       kube-system
	eadd870941895       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                   kube-system
	6b9323a78a161       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   5b168a0bfa7fa       snapshot-controller-7d9fbc56b8-2fl6z       kube-system
	346d71544b514       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   d581658c076a4       csi-hostpath-resizer-0                     kube-system
	eac5adf21505c       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   52f0c46f32c8c       nvidia-device-plugin-daemonset-gmn2x       kube-system
	3e550292e1371       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   68efd17f7e901       registry-6b586f9694-gbhfb                  kube-system
	c6e8e65b52e2c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              create                                   0                   3c8134dc2be06       ingress-nginx-admission-create-mt6ld       ingress-nginx
	0ea8245394cbd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   a0351cc8a1031       yakd-dashboard-5ff678cb9-znnvc             yakd-dashboard
	27c1564d21921       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   3f0e9bc019d23       kube-ingress-dns-minikube                  kube-system
	9110e1016ee8f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   084de1db7dd14       local-path-provisioner-648f6765c9-6pxcn    local-path-storage
	a6bae13c92728       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   8f93d5c3c5c6a       metrics-server-85b7d694d7-bsktp            kube-system
	7b4811c87b3a1       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   55e15ef64d8d5       cloud-spanner-emulator-5bdddb765-qldsf     default
	8fbc644a70c18       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   ffc0db9a71f6e       coredns-66bc5c9577-2bvm4                   kube-system
	507385b0545f3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   90219c938a4de       storage-provisioner                        kube-system
	6557f84007b18       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             5 minutes ago            Running             kube-proxy                               0                   caa8eb5f927fa       kube-proxy-zqc2s                           kube-system
	4767c189dbb1d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   de085f245dcae       kindnet-gvt9x                              kube-system
	3a609b1131be3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   66b640e50d6bd       kube-controller-manager-addons-656754      kube-system
	17a3bd5107c3d       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   d129dec847e42       etcd-addons-656754                         kube-system
	870c81e888423       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   2668be2759f0c       kube-apiserver-addons-656754               kube-system
	8ccf79252e522       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   f6a2619ed74d3       kube-scheduler-addons-656754               kube-system
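
One detail worth noticing in this table: registry-creds is Exited with ATTEMPT 2, meaning the runtime has already restarted it twice (the CRI-O log above shows its previous container being removed). A small helper to surface such containers over CRI, reusing the runtimeapi client from the previous sketch (exitedContainers is a hypothetical name):

package crihelpers

import (
	"context"
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// exitedContainers lists containers currently in the EXITED state over CRI,
// mirroring the STATE column in the table above.
func exitedContainers(ctx context.Context, rt runtimeapi.RuntimeServiceClient) ([]string, error) {
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_EXITED},
		},
	})
	if err != nil {
		return nil, err
	}
	var out []string
	for _, c := range resp.Containers {
		// Full IDs are 64 hex chars; the table above shows the 13-char prefix.
		out = append(out, fmt.Sprintf("%s (%s)", c.Metadata.Name, c.Id[:13]))
	}
	return out, nil
}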
	
	
	==> coredns [8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3] <==
	[INFO] 10.244.0.18:53741 - 47462 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002736795s
	[INFO] 10.244.0.18:53741 - 23775 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145462s
	[INFO] 10.244.0.18:53741 - 63222 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000140983s
	[INFO] 10.244.0.18:60337 - 56977 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151427s
	[INFO] 10.244.0.18:60337 - 57207 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187219s
	[INFO] 10.244.0.18:57145 - 4415 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113404s
	[INFO] 10.244.0.18:57145 - 4835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000279002s
	[INFO] 10.244.0.18:55259 - 54747 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091234s
	[INFO] 10.244.0.18:55259 - 54911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082315s
	[INFO] 10.244.0.18:59313 - 60811 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.000884137s
	[INFO] 10.244.0.18:59313 - 61000 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001388036s
	[INFO] 10.244.0.18:38522 - 50221 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156384s
	[INFO] 10.244.0.18:38522 - 50090 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087049s
	[INFO] 10.244.0.21:60056 - 28281 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157113s
	[INFO] 10.244.0.21:56128 - 7187 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072329s
	[INFO] 10.244.0.21:40780 - 60258 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096739s
	[INFO] 10.244.0.21:40779 - 2755 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064838s
	[INFO] 10.244.0.21:46290 - 6481 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008293s
	[INFO] 10.244.0.21:37281 - 63024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076809s
	[INFO] 10.244.0.21:38058 - 23165 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002161182s
	[INFO] 10.244.0.21:34499 - 16120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001543271s
	[INFO] 10.244.0.21:51715 - 41965 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000695729s
	[INFO] 10.244.0.21:55075 - 44185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002695784s
	[INFO] 10.244.0.23:38429 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000340196s
	[INFO] 10.244.0.23:42091 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012828s
	
	
	==> describe nodes <==
	Name:               addons-656754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-656754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=addons-656754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T21_09_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-656754
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-656754"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 21:09:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-656754
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 21:14:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 21:14:27 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 21:14:27 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 21:14:27 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 21:14:27 +0000   Tue, 02 Dec 2025 21:10:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-656754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                31dfc91e-3dfb-4d63-a545-376482e19a5f
	  Boot ID:                    c77b83b8-287c-4d91-bf3a-e2991f41400e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  default                     cloud-spanner-emulator-5bdddb765-qldsf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     hello-world-app-5d498dc89-kc668             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-qk5vw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  gcp-auth                    gcp-auth-78565c9fb4-qclvf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-vdzzc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m12s
	  kube-system                 coredns-66bc5c9577-2bvm4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m18s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 csi-hostpathplugin-j29dk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-addons-656754                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m23s
	  kube-system                 kindnet-gvt9x                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m18s
	  kube-system                 kube-apiserver-addons-656754                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-addons-656754       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-zqc2s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-addons-656754                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 metrics-server-85b7d694d7-bsktp             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m14s
	  kube-system                 nvidia-device-plugin-daemonset-gmn2x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 registry-6b586f9694-gbhfb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 registry-creds-764b6fb674-bgqc9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 registry-proxy-2zlcv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-2fl6z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-cgbl5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  local-path-storage          local-path-provisioner-648f6765c9-6pxcn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-znnvc              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m16s                  kube-proxy       
	  Normal   Starting                 5m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node addons-656754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node addons-656754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m31s (x8 over 5m31s)  kubelet          Node addons-656754 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m23s                  kubelet          Node addons-656754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m23s                  kubelet          Node addons-656754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m23s                  kubelet          Node addons-656754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m20s                  node-controller  Node addons-656754 event: Registered Node addons-656754 in Controller
	  Normal   NodeReady                4m37s                  kubelet          Node addons-656754 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 18:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6] <==
	{"level":"warn","ts":"2025-12-02T21:09:16.602876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.621523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.650146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.688522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.728039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.755353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.776825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.807244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.857566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.868169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.891237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.905790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.928266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.943231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.968936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.995983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.008986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.051367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.129149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:32.923907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:32.943921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.016321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.034288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.084781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.119425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [058e26f7cd4215cb1afa2046249fb1992e03c9dc587ec4e53f4ba748a2174521] <==
	2025/12/02 21:11:15 GCP Auth Webhook started!
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:40 Ready to marshal response ...
	2025/12/02 21:11:40 Ready to write response ...
	2025/12/02 21:11:51 Ready to marshal response ...
	2025/12/02 21:11:51 Ready to write response ...
	2025/12/02 21:11:51 Ready to marshal response ...
	2025/12/02 21:11:51 Ready to write response ...
	2025/12/02 21:11:56 Ready to marshal response ...
	2025/12/02 21:11:56 Ready to write response ...
	2025/12/02 21:12:04 Ready to marshal response ...
	2025/12/02 21:12:04 Ready to write response ...
	2025/12/02 21:12:12 Ready to marshal response ...
	2025/12/02 21:12:12 Ready to write response ...
	2025/12/02 21:12:23 Ready to marshal response ...
	2025/12/02 21:12:23 Ready to write response ...
	2025/12/02 21:14:41 Ready to marshal response ...
	2025/12/02 21:14:41 Ready to write response ...
	
	
	==> kernel <==
	 21:14:44 up  2:56,  0 user,  load average: 0.20, 1.13, 1.43
	Linux addons-656754 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513] <==
	I1202 21:12:37.039472       1 main.go:301] handling current node
	I1202 21:12:47.039972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:12:47.040008       1 main.go:301] handling current node
	I1202 21:12:57.039709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:12:57.039745       1 main.go:301] handling current node
	I1202 21:13:07.039501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:07.039534       1 main.go:301] handling current node
	I1202 21:13:17.039288       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:17.039353       1 main.go:301] handling current node
	I1202 21:13:27.039132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:27.039173       1 main.go:301] handling current node
	I1202 21:13:37.039502       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:37.039540       1 main.go:301] handling current node
	I1202 21:13:47.039164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:47.039208       1 main.go:301] handling current node
	I1202 21:13:57.040094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:13:57.040128       1 main.go:301] handling current node
	I1202 21:14:07.039408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:14:07.039444       1 main.go:301] handling current node
	I1202 21:14:17.039326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:14:17.039359       1 main.go:301] handling current node
	I1202 21:14:27.039284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:14:27.039320       1 main.go:301] handling current node
	I1202 21:14:37.039923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:14:37.039957       1 main.go:301] handling current node
	
	
	==> kube-apiserver [870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754] <==
	W1202 21:09:55.016128       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.034040       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.084916       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.115672       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:10:07.673457       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.673504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:07.681579       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.681616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:07.785060       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.785197       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	E1202 21:10:26.245860       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.154.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.154.108:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.154.108:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:26.246605       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 21:10:26.246668       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 21:10:26.330314       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 21:11:28.598430       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34350: use of closed network connection
	E1202 21:11:28.820675       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34372: use of closed network connection
	E1202 21:11:28.957389       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34400: use of closed network connection
	I1202 21:12:08.129631       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1202 21:12:21.764712       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1202 21:12:23.011492       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 21:12:23.317201       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.237.111"}
	I1202 21:14:42.188961       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.192.138"}
	
	
	==> kube-controller-manager [3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6] <==
	I1202 21:09:25.019530       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 21:09:25.021817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:09:25.021838       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 21:09:25.021847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 21:09:25.022473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 21:09:25.022513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 21:09:25.022882       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 21:09:25.024035       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 21:09:25.024271       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 21:09:25.024772       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 21:09:25.027265       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 21:09:25.030184       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 21:09:25.030339       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 21:09:25.030635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1202 21:09:30.539657       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 21:09:54.998598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 21:09:54.998766       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 21:09:54.998837       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 21:09:55.033181       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 21:09:55.048355       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 21:09:55.102508       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 21:09:55.151374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:10:09.969131       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1202 21:10:25.109693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 21:10:25.159987       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd] <==
	I1202 21:09:26.954182       1 server_linux.go:53] "Using iptables proxy"
	I1202 21:09:27.097627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 21:09:27.197917       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 21:09:27.197955       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 21:09:27.198040       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 21:09:27.249572       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 21:09:27.249632       1 server_linux.go:132] "Using iptables Proxier"
	I1202 21:09:27.257633       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 21:09:27.260407       1 server.go:527] "Version info" version="v1.34.2"
	I1202 21:09:27.260430       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:09:27.269791       1 config.go:106] "Starting endpoint slice config controller"
	I1202 21:09:27.269813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 21:09:27.270105       1 config.go:200] "Starting service config controller"
	I1202 21:09:27.270112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 21:09:27.270411       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 21:09:27.270421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 21:09:27.270812       1 config.go:309] "Starting node config controller"
	I1202 21:09:27.270819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 21:09:27.270825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 21:09:27.370331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 21:09:27.370404       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 21:09:27.370579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d] <==
	E1202 21:09:18.091535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 21:09:18.091534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 21:09:18.091631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 21:09:18.091635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 21:09:18.091676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 21:09:18.091792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 21:09:18.093815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 21:09:18.093935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 21:09:18.913935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 21:09:18.938717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 21:09:18.982354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 21:09:19.018351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 21:09:19.098295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 21:09:19.151613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 21:09:19.194164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 21:09:19.203807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 21:09:19.209169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 21:09:19.221514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 21:09:19.228265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 21:09:19.247712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 21:09:19.248865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 21:09:19.256147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 21:09:19.299542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 21:09:19.641388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 21:09:21.761268       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 21:13:33 addons-656754 kubelet[1256]: I1202 21:13:33.804887    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-gbhfb" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:17 addons-656754 kubelet[1256]: I1202 21:14:17.805495    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:19 addons-656754 kubelet[1256]: I1202 21:14:19.152017    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:19 addons-656754 kubelet[1256]: I1202 21:14:19.152068    1256 scope.go:117] "RemoveContainer" containerID="5f560e2af4c19c33383ec23395b902871fd3d08d7cc4286524c9dd792f2857c0"
	Dec 02 21:14:19 addons-656754 kubelet[1256]: I1202 21:14:19.177577    1256 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=114.438820777 podStartE2EDuration="1m56.177558794s" podCreationTimestamp="2025-12-02 21:12:23 +0000 UTC" firstStartedPulling="2025-12-02 21:12:23.598378185 +0000 UTC m=+182.940487146" lastFinishedPulling="2025-12-02 21:12:25.337116194 +0000 UTC m=+184.679225163" observedRunningTime="2025-12-02 21:12:25.76855088 +0000 UTC m=+185.110659849" watchObservedRunningTime="2025-12-02 21:14:19.177558794 +0000 UTC m=+298.519667755"
	Dec 02 21:14:20 addons-656754 kubelet[1256]: I1202 21:14:20.158383    1256 scope.go:117] "RemoveContainer" containerID="5f560e2af4c19c33383ec23395b902871fd3d08d7cc4286524c9dd792f2857c0"
	Dec 02 21:14:20 addons-656754 kubelet[1256]: I1202 21:14:20.158689    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:20 addons-656754 kubelet[1256]: I1202 21:14:20.158726    1256 scope.go:117] "RemoveContainer" containerID="4cbc27327c4d173ab9b7fbdd071af4609ace7ef7dd7643dbd6f485e417ba29a8"
	Dec 02 21:14:20 addons-656754 kubelet[1256]: E1202 21:14:20.158882    1256 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bgqc9_kube-system(b9193e40-6002-48d5-8fce-7e6beaee342f)\"" pod="kube-system/registry-creds-764b6fb674-bgqc9" podUID="b9193e40-6002-48d5-8fce-7e6beaee342f"
	Dec 02 21:14:21 addons-656754 kubelet[1256]: I1202 21:14:21.163938    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:21 addons-656754 kubelet[1256]: I1202 21:14:21.171285    1256 scope.go:117] "RemoveContainer" containerID="4cbc27327c4d173ab9b7fbdd071af4609ace7ef7dd7643dbd6f485e417ba29a8"
	Dec 02 21:14:21 addons-656754 kubelet[1256]: E1202 21:14:21.171922    1256 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bgqc9_kube-system(b9193e40-6002-48d5-8fce-7e6beaee342f)\"" pod="kube-system/registry-creds-764b6fb674-bgqc9" podUID="b9193e40-6002-48d5-8fce-7e6beaee342f"
	Dec 02 21:14:33 addons-656754 kubelet[1256]: I1202 21:14:33.805212    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:33 addons-656754 kubelet[1256]: I1202 21:14:33.805293    1256 scope.go:117] "RemoveContainer" containerID="4cbc27327c4d173ab9b7fbdd071af4609ace7ef7dd7643dbd6f485e417ba29a8"
	Dec 02 21:14:34 addons-656754 kubelet[1256]: I1202 21:14:34.209972    1256 scope.go:117] "RemoveContainer" containerID="4cbc27327c4d173ab9b7fbdd071af4609ace7ef7dd7643dbd6f485e417ba29a8"
	Dec 02 21:14:34 addons-656754 kubelet[1256]: I1202 21:14:34.210174    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bgqc9" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:34 addons-656754 kubelet[1256]: I1202 21:14:34.210227    1256 scope.go:117] "RemoveContainer" containerID="9d64edb5134c406b8ab77b354da66cf64c1f87679dcf812aea4fc61724bc3111"
	Dec 02 21:14:34 addons-656754 kubelet[1256]: E1202 21:14:34.210386    1256 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bgqc9_kube-system(b9193e40-6002-48d5-8fce-7e6beaee342f)\"" pod="kube-system/registry-creds-764b6fb674-bgqc9" podUID="b9193e40-6002-48d5-8fce-7e6beaee342f"
	Dec 02 21:14:34 addons-656754 kubelet[1256]: I1202 21:14:34.804923    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gmn2x" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:40 addons-656754 kubelet[1256]: I1202 21:14:40.807552    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2zlcv" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:14:42 addons-656754 kubelet[1256]: I1202 21:14:42.068649    1256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85jp\" (UniqueName: \"kubernetes.io/projected/3a8b0f27-1f8c-4b01-ad22-81c7426e0346-kube-api-access-p85jp\") pod \"hello-world-app-5d498dc89-kc668\" (UID: \"3a8b0f27-1f8c-4b01-ad22-81c7426e0346\") " pod="default/hello-world-app-5d498dc89-kc668"
	Dec 02 21:14:42 addons-656754 kubelet[1256]: I1202 21:14:42.069352    1256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3a8b0f27-1f8c-4b01-ad22-81c7426e0346-gcp-creds\") pod \"hello-world-app-5d498dc89-kc668\" (UID: \"3a8b0f27-1f8c-4b01-ad22-81c7426e0346\") " pod="default/hello-world-app-5d498dc89-kc668"
	Dec 02 21:14:42 addons-656754 kubelet[1256]: W1202 21:14:42.359841    1256 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/crio-09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273 WatchSource:0}: Error finding container 09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273: Status 404 returned error can't find the container with id 09e045e13e29850ed9c7750e122aec3ad874e1be88bec17a610b0a455fffa273
	Dec 02 21:14:43 addons-656754 kubelet[1256]: I1202 21:14:43.267384    1256 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-kc668" podStartSLOduration=1.465470865 podStartE2EDuration="2.267363607s" podCreationTimestamp="2025-12-02 21:14:41 +0000 UTC" firstStartedPulling="2025-12-02 21:14:42.363859277 +0000 UTC m=+321.705968246" lastFinishedPulling="2025-12-02 21:14:43.165752027 +0000 UTC m=+322.507860988" observedRunningTime="2025-12-02 21:14:43.262801645 +0000 UTC m=+322.604910606" watchObservedRunningTime="2025-12-02 21:14:43.267363607 +0000 UTC m=+322.609472568"
	Dec 02 21:14:43 addons-656754 kubelet[1256]: I1202 21:14:43.805180    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-gbhfb" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e] <==
	W1202 21:14:19.702389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:21.705816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:21.711562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:23.714359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:23.718491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:25.721913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:25.729012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:27.731505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:27.735849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:29.739693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:29.744191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:31.747557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:31.754443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:33.758479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:33.763085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:35.770779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:35.776541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:37.779405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:37.783902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:39.788106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:39.793502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:41.796850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:41.805516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:43.809498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:14:43.816031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-656754 -n addons-656754
helpers_test.go:269: (dbg) Run:  kubectl --context addons-656754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb: exit status 1 (123.947191ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mt6ld" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2fnsb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (310.705194ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:14:45.318245  457812 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:14:45.319511  457812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:14:45.319567  457812 out.go:374] Setting ErrFile to fd 2...
	I1202 21:14:45.319588  457812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:14:45.319902  457812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:14:45.320258  457812 mustload.go:66] Loading cluster: addons-656754
	I1202 21:14:45.320714  457812 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:14:45.320755  457812 addons.go:622] checking whether the cluster is paused
	I1202 21:14:45.320906  457812 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:14:45.320938  457812 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:14:45.321554  457812 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:14:45.349616  457812 ssh_runner.go:195] Run: systemctl --version
	I1202 21:14:45.349667  457812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:14:45.384575  457812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:14:45.501660  457812 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:14:45.501749  457812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:14:45.537960  457812 cri.go:89] found id: "9d64edb5134c406b8ab77b354da66cf64c1f87679dcf812aea4fc61724bc3111"
	I1202 21:14:45.537979  457812 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:14:45.537984  457812 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:14:45.537992  457812 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:14:45.537996  457812 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:14:45.538000  457812 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:14:45.538003  457812 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:14:45.538006  457812 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:14:45.538009  457812 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:14:45.538015  457812 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:14:45.538019  457812 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:14:45.538022  457812 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:14:45.538025  457812 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:14:45.538028  457812 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:14:45.538032  457812 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:14:45.538036  457812 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:14:45.538039  457812 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:14:45.538043  457812 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:14:45.538046  457812 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:14:45.538049  457812 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:14:45.538053  457812 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:14:45.538056  457812 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:14:45.538059  457812 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:14:45.538062  457812 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:14:45.538065  457812 cri.go:89] found id: ""
	I1202 21:14:45.538119  457812 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:14:45.553920  457812 out.go:203] 
	W1202 21:14:45.556916  457812 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:14:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:14:45.556939  457812 out.go:285] * 
	W1202 21:14:45.562549  457812 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:14:45.565441  457812 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable ingress --alsologtostderr -v=1: exit status 11 (264.93795ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:14:45.625572  457924 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:14:45.626740  457924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:14:45.626757  457924 out.go:374] Setting ErrFile to fd 2...
	I1202 21:14:45.626763  457924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:14:45.627166  457924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:14:45.627488  457924 mustload.go:66] Loading cluster: addons-656754
	I1202 21:14:45.627876  457924 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:14:45.627895  457924 addons.go:622] checking whether the cluster is paused
	I1202 21:14:45.628001  457924 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:14:45.628021  457924 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:14:45.628561  457924 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:14:45.646162  457924 ssh_runner.go:195] Run: systemctl --version
	I1202 21:14:45.646230  457924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:14:45.663758  457924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:14:45.769847  457924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:14:45.769929  457924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:14:45.800471  457924 cri.go:89] found id: "9d64edb5134c406b8ab77b354da66cf64c1f87679dcf812aea4fc61724bc3111"
	I1202 21:14:45.800498  457924 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:14:45.800503  457924 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:14:45.800516  457924 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:14:45.800521  457924 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:14:45.800524  457924 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:14:45.800528  457924 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:14:45.800532  457924 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:14:45.800535  457924 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:14:45.800542  457924 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:14:45.800545  457924 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:14:45.800548  457924 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:14:45.800551  457924 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:14:45.800554  457924 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:14:45.800557  457924 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:14:45.800563  457924 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:14:45.800566  457924 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:14:45.800572  457924 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:14:45.800576  457924 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:14:45.800579  457924 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:14:45.800584  457924 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:14:45.800591  457924 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:14:45.800594  457924 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:14:45.800597  457924 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:14:45.800603  457924 cri.go:89] found id: ""
	I1202 21:14:45.800657  457924 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:14:45.817528  457924 out.go:203] 
	W1202 21:14:45.820467  457924 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:14:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:14:45.820503  457924 out.go:285] * 
	W1202 21:14:45.826338  457924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:14:45.830469  457924 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.13s)
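Note: every addon-disable failure in this run follows the same pattern visible in the stderr above: before touching an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this CRI-O cluster /run/runc does not exist, so the check itself exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. A plausible reading (an assumption, not confirmed by the log) is that CRI-O here is configured with crun rather than runc, so runc has no state directory. A minimal sketch for reproducing the check by hand:

    # the paused-state check as minikube runs it (fails on this node):
    minikube -p addons-656754 ssh -- sudo runc list -f json
    # hypothetical cross-checks, assuming crun is the configured OCI runtime:
    minikube -p addons-656754 ssh -- sudo crun list
    minikube -p addons-656754 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system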

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qk5vw" [e8461ca0-71ba-4990-a826-1cd53d4777b4] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007624699s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (314.97674ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:20.996144  456033 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:20.996836  456033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:20.996849  456033 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:20.996854  456033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:20.997115  456033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:20.997390  456033 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:20.997782  456033 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:20.997800  456033 addons.go:622] checking whether the cluster is paused
	I1202 21:12:20.997908  456033 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:20.997921  456033 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:20.998426  456033 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:21.020873  456033 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:21.020928  456033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:21.041134  456033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:21.161848  456033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:21.161934  456033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:21.193956  456033 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:21.193987  456033 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:21.193993  456033 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:21.193997  456033 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:21.194001  456033 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:21.194005  456033 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:21.194008  456033 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:21.194011  456033 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:21.194015  456033 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:21.194022  456033 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:21.194026  456033 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:21.194030  456033 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:21.194034  456033 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:21.194037  456033 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:21.194040  456033 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:21.194049  456033 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:21.194057  456033 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:21.194062  456033 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:21.194066  456033 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:21.194069  456033 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:21.194074  456033 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:21.194078  456033 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:21.194081  456033 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:21.194084  456033 cri.go:89] found id: ""
	I1202 21:12:21.194136  456033 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:21.209571  456033 out.go:203] 
	W1202 21:12:21.212548  456033 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:21.212583  456033 out.go:285] * 
	W1202 21:12:21.218131  456033 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:21.221255  456033 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.32s)
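Note: the gadget pod itself was healthy; only the disable step failed, for the same runc paused-check reason described under the Ingress failure above. A sketch for inspecting the inspektor-gadget DaemonSet directly (standard kubectl; label and namespace taken from the test output):

    kubectl --context addons-656754 -n gadget get daemonset,pods -l k8s-app=gadget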

TestAddons/parallel/MetricsServer (5.42s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.486171ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004086545s
addons_test.go:463: (dbg) Run:  kubectl --context addons-656754 top pods -n kube-system
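Note: `kubectl top pods` only succeeds once metrics-server is registered and serving the metrics API, which is what the healthy-within wait above establishes. A sketch for verifying that registration by hand (v1beta1.metrics.k8s.io is the standard metrics-server APIService name):

    kubectl --context addons-656754 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-656754 top pods -n kube-system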
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (310.594221ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:26.423023  456450 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:26.423769  456450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:26.423782  456450 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:26.423786  456450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:26.424078  456450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:26.424385  456450 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:26.424766  456450 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:26.424822  456450 addons.go:622] checking whether the cluster is paused
	I1202 21:12:26.424963  456450 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:26.424977  456450 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:26.425517  456450 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:26.444555  456450 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:26.444619  456450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:26.466707  456450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:26.570279  456450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:26.570376  456450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:26.611695  456450 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:26.611716  456450 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:26.611722  456450 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:26.611735  456450 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:26.611739  456450 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:26.611742  456450 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:26.611745  456450 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:26.611748  456450 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:26.611751  456450 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:26.611757  456450 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:26.611761  456450 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:26.611764  456450 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:26.611768  456450 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:26.611776  456450 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:26.611779  456450 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:26.611784  456450 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:26.611787  456450 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:26.611791  456450 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:26.611794  456450 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:26.611797  456450 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:26.611802  456450 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:26.611812  456450 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:26.611815  456450 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:26.611818  456450 cri.go:89] found id: ""
	I1202 21:12:26.611876  456450 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:26.628889  456450 out.go:203] 
	W1202 21:12:26.631840  456450 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:26.631872  456450 out.go:285] * 
	W1202 21:12:26.637481  456450 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:26.640347  456450 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.42s)

TestAddons/parallel/CSI (47.21s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1202 21:11:35.502674  447211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 21:11:35.506706  447211 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 21:11:35.506733  447211 kapi.go:107] duration metric: took 4.071275ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.080941ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-656754 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/12/02 21:11:44 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-656754 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [040d5f44-476b-42c5-82a6-9291342d8f5f] Pending
helpers_test.go:352: "task-pv-pod" [040d5f44-476b-42c5-82a6-9291342d8f5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [040d5f44-476b-42c5-82a6-9291342d8f5f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003397708s
addons_test.go:572: (dbg) Run:  kubectl --context addons-656754 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-656754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-656754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-656754 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-656754 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-656754 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-656754 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5aa63b05-401e-4b7e-8f3e-cfe64abb8b17] Pending
helpers_test.go:352: "task-pv-pod-restore" [5aa63b05-401e-4b7e-8f3e-cfe64abb8b17] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5aa63b05-401e-4b7e-8f3e-cfe64abb8b17] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00287034s
addons_test.go:614: (dbg) Run:  kubectl --context addons-656754 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-656754 delete pod task-pv-pod-restore: (1.257823325s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-656754 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-656754 delete volumesnapshot new-snapshot-demo
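Note: the steps above exercise the full CSI snapshot/restore round trip: provision a PVC, run a pod against it, snapshot the claim, delete the originals, then restore a new PVC from the snapshot and run a pod against that. An illustrative sketch of the two objects involved, in the shape this test uses; this is not the contents of the repo's testdata files, and the csi-hostpath-snapclass name is an assumption:

    kubectl --context addons-656754 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF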
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (268.921197ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:22.217095  456104 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:22.217910  456104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:22.217954  456104 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:22.217979  456104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:22.218707  456104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:22.219226  456104 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:22.219958  456104 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:22.219979  456104 addons.go:622] checking whether the cluster is paused
	I1202 21:12:22.220163  456104 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:22.220182  456104 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:22.220958  456104 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:22.243629  456104 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:22.243688  456104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:22.261379  456104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:22.369800  456104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:22.369890  456104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:22.400005  456104 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:22.400029  456104 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:22.400033  456104 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:22.400037  456104 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:22.400040  456104 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:22.400044  456104 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:22.400047  456104 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:22.400050  456104 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:22.400053  456104 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:22.400079  456104 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:22.400093  456104 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:22.400098  456104 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:22.400107  456104 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:22.400111  456104 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:22.400114  456104 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:22.400120  456104 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:22.400126  456104 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:22.400131  456104 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:22.400135  456104 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:22.400137  456104 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:22.400155  456104 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:22.400163  456104 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:22.400168  456104 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:22.400183  456104 cri.go:89] found id: ""
	I1202 21:12:22.400258  456104 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:22.415556  456104 out.go:203] 
	W1202 21:12:22.418501  456104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:22.418530  456104 out.go:285] * 
	W1202 21:12:22.424138  456104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:22.427052  456104 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (277.228343ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:22.489689  456150 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:22.490424  456150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:22.490443  456150 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:22.490449  456150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:22.490759  456150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:22.491158  456150 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:22.491615  456150 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:22.491637  456150 addons.go:622] checking whether the cluster is paused
	I1202 21:12:22.491791  456150 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:22.491809  456150 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:22.492381  456150 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:22.513764  456150 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:22.513821  456150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:22.532506  456150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:22.641903  456150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:22.641988  456150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:22.678220  456150 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:22.678238  456150 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:22.678243  456150 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:22.678247  456150 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:22.678250  456150 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:22.678254  456150 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:22.678257  456150 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:22.678260  456150 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:22.678263  456150 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:22.678269  456150 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:22.678273  456150 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:22.678276  456150 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:22.678279  456150 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:22.678282  456150 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:22.678286  456150 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:22.678293  456150 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:22.678297  456150 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:22.678304  456150 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:22.678307  456150 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:22.678335  456150 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:22.678341  456150 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:22.678344  456150 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:22.678348  456150 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:22.678351  456150 cri.go:89] found id: ""
	I1202 21:12:22.678403  456150 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:22.693580  456150 out.go:203] 
	W1202 21:12:22.696504  456150 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:22.696527  456150 out.go:285] * 
	W1202 21:12:22.702064  456150 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:22.705139  456150 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.21s)

TestAddons/parallel/Headlamp (3.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-656754 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-656754 --alsologtostderr -v=1: exit status 11 (302.411626ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:11.405195  455404 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:11.405954  455404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:11.405993  455404 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:11.406016  455404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:11.406318  455404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:11.406632  455404 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:11.407101  455404 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:11.407132  455404 addons.go:622] checking whether the cluster is paused
	I1202 21:12:11.407260  455404 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:11.407284  455404 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:11.407832  455404 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:11.434754  455404 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:11.434819  455404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:11.466667  455404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:11.574175  455404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:11.574311  455404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:11.605468  455404 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:11.605504  455404 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:11.605510  455404 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:11.605514  455404 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:11.605517  455404 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:11.605521  455404 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:11.605524  455404 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:11.605527  455404 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:11.605530  455404 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:11.605536  455404 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:11.605539  455404 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:11.605543  455404 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:11.605547  455404 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:11.605558  455404 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:11.605564  455404 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:11.605570  455404 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:11.605573  455404 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:11.605578  455404 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:11.605581  455404 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:11.605584  455404 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:11.605589  455404 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:11.605592  455404 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:11.605595  455404 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:11.605598  455404 cri.go:89] found id: ""
	I1202 21:12:11.605657  455404 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:11.621771  455404 out.go:203] 
	W1202 21:12:11.624842  455404 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:11.624874  455404 out.go:285] * 
	W1202 21:12:11.630690  455404 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:11.633675  455404 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-656754 --alsologtostderr -v=1": exit status 11
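Headlamp fails on the same probe as CSI above (MK_ADDON_ENABLE_PAUSED instead of MK_ADDON_DISABLE_PAUSED, identical runc error). Purely as an illustrative sketch of the failure mode, and not minikube's actual code or fix: a list-paused probe could treat a missing runc state directory as "no containers" instead of a hard error, along these lines:

	package main
	
	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
		"os/exec"
	)
	
	// listRuncContainers runs the command from the trace above, but treats a
	// missing state dir (/run/runc on this crio node) as an empty list rather
	// than a fatal error. Illustrative only.
	func listRuncContainers() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); errors.Is(err, fs.ErrNotExist) {
			return []byte("[]"), nil // runc has never created a container here
		}
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}
	
	func main() {
		out, err := listRuncContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, "list paused:", err)
			os.Exit(1)
		}
		fmt.Printf("%s\n", out)
	}
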
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-656754
helpers_test.go:243: (dbg) docker inspect addons-656754:

-- stdout --
	[
	    {
	        "Id": "efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036",
	        "Created": "2025-12-02T21:08:59.231811527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448603,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:08:59.296791297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/hostname",
	        "HostsPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/hosts",
	        "LogPath": "/var/lib/docker/containers/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036-json.log",
	        "Name": "/addons-656754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-656754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-656754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036",
	                "LowerDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0bb97ad637cd5eca01a92a328479055d47346272cce4fc3d97958def6365ada/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-656754",
	                "Source": "/var/lib/docker/volumes/addons-656754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-656754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-656754",
	                "name.minikube.sigs.k8s.io": "addons-656754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9edec1bfac1be8f8d951bd8d9f55267a5f117dbd28895252fdd0ac72ca0282e",
	            "SandboxKey": "/var/run/docker/netns/c9edec1bfac1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-656754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:58:d4:c4:78:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99d20b9fca2e0f43d68a83eb1455218fde6d1486f2da0b1dcae3ebb9594c9f46",
	                    "EndpointID": "01b0167302bc7a99382dc25c557580a0a0d8b63c67c67e397ade2e5624404f71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-656754",
	                        "efe0c78f1497"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
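The full inspect dump above is rarely needed for triage; single fields can be pulled with the same Go template the harness itself used earlier in this run, for example the SSH host port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-656754
	# 33133 on this run
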
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-656754 -n addons-656754
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-656754 logs -n 25: (1.754028557s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-304980                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-304980   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ -o=json --download-only -p download-only-215360 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-215360   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-215360                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-215360   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-227195                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-227195   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-304980                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-304980   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-215360                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-215360   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ --download-only -p download-docker-798204 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-798204 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ -p download-docker-798204                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-798204 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ --download-only -p binary-mirror-045307 --alsologtostderr --binary-mirror http://127.0.0.1:40293 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045307   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ -p binary-mirror-045307                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-045307   │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ addons  │ enable dashboard -p addons-656754                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-656754                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ start   │ -p addons-656754 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:11 UTC │
	│ addons  │ addons-656754 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ ip      │ addons-656754 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │ 02 Dec 25 21:11 UTC │
	│ addons  │ addons-656754 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ addons  │ addons-656754 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:11 UTC │                     │
	│ ssh     │ addons-656754 ssh cat /opt/local-path-provisioner/pvc-3df1e97b-8903-4317-b848-7da6166c304a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │ 02 Dec 25 21:12 UTC │
	│ addons  │ addons-656754 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ addons-656754 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-656754 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-656754          │ jenkins │ v1.37.0 │ 02 Dec 25 21:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:08:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:08:53.266224  448211 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:08:53.266434  448211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:53.266461  448211 out.go:374] Setting ErrFile to fd 2...
	I1202 21:08:53.266483  448211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:53.267078  448211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:08:53.267555  448211 out.go:368] Setting JSON to false
	I1202 21:08:53.268387  448211 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10262,"bootTime":1764699472,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:08:53.268457  448211 start.go:143] virtualization:  
	I1202 21:08:53.271683  448211 out.go:179] * [addons-656754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:08:53.275598  448211 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:08:53.275740  448211 notify.go:221] Checking for updates...
	I1202 21:08:53.281520  448211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:08:53.284498  448211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:08:53.287365  448211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:08:53.290198  448211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:08:53.293096  448211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:08:53.296256  448211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:08:53.324844  448211 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:08:53.324964  448211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:53.386135  448211 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:53.37684628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:53.386244  448211 docker.go:319] overlay module found
	I1202 21:08:53.390919  448211 out.go:179] * Using the docker driver based on user configuration
	I1202 21:08:53.393696  448211 start.go:309] selected driver: docker
	I1202 21:08:53.393715  448211 start.go:927] validating driver "docker" against <nil>
	I1202 21:08:53.393728  448211 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:08:53.394454  448211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:53.447649  448211 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:53.438130105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:53.447802  448211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:08:53.448068  448211 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:08:53.451056  448211 out.go:179] * Using Docker driver with root privileges
	I1202 21:08:53.453864  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:08:53.453934  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:08:53.453948  448211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:08:53.454025  448211 start.go:353] cluster config:
	{Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:08:53.457104  448211 out.go:179] * Starting "addons-656754" primary control-plane node in "addons-656754" cluster
	I1202 21:08:53.459832  448211 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:08:53.462701  448211 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:08:53.465618  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:53.465663  448211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 21:08:53.465676  448211 cache.go:65] Caching tarball of preloaded images
	I1202 21:08:53.465687  448211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:08:53.465759  448211 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 21:08:53.465769  448211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 21:08:53.466097  448211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json ...
	I1202 21:08:53.466117  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json: {Name:mka7b54be10a861bfb995eaef2daf2bf1910d7e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:08:53.484454  448211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:08:53.484475  448211 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 21:08:53.484494  448211 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:08:53.484555  448211 start.go:360] acquireMachinesLock for addons-656754: {Name:mk3a37f4628ff59aab4458c86531034220273f2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:08:53.484657  448211 start.go:364] duration metric: took 80.887µs to acquireMachinesLock for "addons-656754"
	I1202 21:08:53.484691  448211 start.go:93] Provisioning new machine with config: &{Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:08:53.484759  448211 start.go:125] createHost starting for "" (driver="docker")
	I1202 21:08:53.488112  448211 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 21:08:53.488338  448211 start.go:159] libmachine.API.Create for "addons-656754" (driver="docker")
	I1202 21:08:53.488367  448211 client.go:173] LocalClient.Create starting
	I1202 21:08:53.488477  448211 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem
	I1202 21:08:53.722163  448211 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem
	I1202 21:08:53.837794  448211 cli_runner.go:164] Run: docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 21:08:53.853895  448211 cli_runner.go:211] docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 21:08:53.853991  448211 network_create.go:284] running [docker network inspect addons-656754] to gather additional debugging logs...
	I1202 21:08:53.854011  448211 cli_runner.go:164] Run: docker network inspect addons-656754
	W1202 21:08:53.870292  448211 cli_runner.go:211] docker network inspect addons-656754 returned with exit code 1
	I1202 21:08:53.870322  448211 network_create.go:287] error running [docker network inspect addons-656754]: docker network inspect addons-656754: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-656754 not found
	I1202 21:08:53.870335  448211 network_create.go:289] output of [docker network inspect addons-656754]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-656754 not found
	
	** /stderr **
	I1202 21:08:53.870435  448211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:08:53.886760  448211 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1550}
	I1202 21:08:53.886800  448211 network_create.go:124] attempt to create docker network addons-656754 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 21:08:53.886855  448211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-656754 addons-656754
	I1202 21:08:53.944633  448211 network_create.go:108] docker network addons-656754 192.168.49.0/24 created
	I1202 21:08:53.944662  448211 kic.go:121] calculated static IP "192.168.49.2" for the "addons-656754" container
	I1202 21:08:53.944750  448211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 21:08:53.960453  448211 cli_runner.go:164] Run: docker volume create addons-656754 --label name.minikube.sigs.k8s.io=addons-656754 --label created_by.minikube.sigs.k8s.io=true
	I1202 21:08:53.978468  448211 oci.go:103] Successfully created a docker volume addons-656754
	I1202 21:08:53.978554  448211 cli_runner.go:164] Run: docker run --rm --name addons-656754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --entrypoint /usr/bin/test -v addons-656754:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 21:08:55.170613  448211 cli_runner.go:217] Completed: docker run --rm --name addons-656754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --entrypoint /usr/bin/test -v addons-656754:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.192018358s)
	I1202 21:08:55.170643  448211 oci.go:107] Successfully prepared a docker volume addons-656754
	I1202 21:08:55.170694  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:55.170709  448211 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 21:08:55.170783  448211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-656754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 21:08:59.165951  448211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-656754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.995125032s)
	I1202 21:08:59.165981  448211 kic.go:203] duration metric: took 3.995268089s to extract preloaded images to volume ...
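The extraction step above is just tar running inside a throwaway kicbase container, with the preload tarball bind-mounted read-only and the named volume mounted as the extraction target. A sketch that replays the logged command via os/exec; the paths and image come from the log, and error handling is simplified:

```go
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload replays the command shown in the log: run a throwaway
// container whose entrypoint is tar, mount the preload tarball read-only
// at /preloaded.tar, and unpack it into the named volume at /extractDir.
// A sketch of the idea, not minikube's actual implementation.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}
```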
	W1202 21:08:59.166125  448211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 21:08:59.166225  448211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 21:08:59.217551  448211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-656754 --name addons-656754 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-656754 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-656754 --network addons-656754 --ip 192.168.49.2 --volume addons-656754:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 21:08:59.505717  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Running}}
	I1202 21:08:59.531161  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:08:59.567199  448211 cli_runner.go:164] Run: docker exec addons-656754 stat /var/lib/dpkg/alternatives/iptables
	I1202 21:08:59.626088  448211 oci.go:144] the created container "addons-656754" has a running status.
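After the long `docker run` above, minikube inspects the container until it reports a running state. A hedged sketch of that poll loop; the 200ms interval and 30s deadline are illustrative guesses, not minikube's values:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls the same inspect query the log issues after "docker run"
// until the node container reports State.Running == true.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	fmt.Println(waitRunning("addons-656754", 30*time.Second))
}
```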
	I1202 21:08:59.626116  448211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa...
	I1202 21:09:00.328370  448211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 21:09:00.364489  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:00.397099  448211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 21:09:00.397136  448211 kic_runner.go:114] Args: [docker exec --privileged addons-656754 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 21:09:00.465850  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:00.488381  448211 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:00.488497  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.509221  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.509608  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.509627  448211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:00.675150  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-656754
	
	I1202 21:09:00.675177  448211 ubuntu.go:182] provisioning hostname "addons-656754"
	I1202 21:09:00.675251  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.698942  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.699292  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.699317  448211 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-656754 && echo "addons-656754" | sudo tee /etc/hostname
	I1202 21:09:00.868764  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-656754
	
	I1202 21:09:00.868906  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:00.886124  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:00.886446  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:00.886462  448211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-656754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-656754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-656754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:01.039485  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:09:01.039531  448211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:09:01.039552  448211 ubuntu.go:190] setting up certificates
	I1202 21:09:01.039566  448211 provision.go:84] configureAuth start
	I1202 21:09:01.039638  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:01.058348  448211 provision.go:143] copyHostCerts
	I1202 21:09:01.058435  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:09:01.058571  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:09:01.058647  448211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:09:01.058709  448211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.addons-656754 san=[127.0.0.1 192.168.49.2 addons-656754 localhost minikube]
	I1202 21:09:01.260946  448211 provision.go:177] copyRemoteCerts
	I1202 21:09:01.261073  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:01.261117  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.279268  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:01.383248  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:09:01.401849  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 21:09:01.420992  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:01.439469  448211 provision.go:87] duration metric: took 399.879692ms to configureAuth
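configureAuth generates a server certificate whose SANs match the list logged at provision.go:117 (127.0.0.1, 192.168.49.2, addons-656754, localhost, minikube). The sketch below shows the general crypto/x509 recipe for such a certificate; it creates a throwaway CA so the example is self-contained, whereas minikube reuses ca.pem/ca-key.pem from its certs directory, and errors are ignored for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would load its existing CA key pair instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-656754"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-656754", "localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```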
	I1202 21:09:01.439542  448211 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:01.439771  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:01.439893  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.457531  448211 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:01.457852  448211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1202 21:09:01.457873  448211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:09:01.971440  448211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:09:01.971461  448211 machine.go:97] duration metric: took 1.48305281s to provisionDockerMachine
	I1202 21:09:01.971472  448211 client.go:176] duration metric: took 8.483099172s to LocalClient.Create
	I1202 21:09:01.971483  448211 start.go:167] duration metric: took 8.483147707s to libmachine.API.Create "addons-656754"
	I1202 21:09:01.971490  448211 start.go:293] postStartSetup for "addons-656754" (driver="docker")
	I1202 21:09:01.971500  448211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:01.971561  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:01.971599  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:01.990558  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.096212  448211 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:02.099721  448211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:02.099748  448211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:02.099760  448211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:09:02.099833  448211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:09:02.099854  448211 start.go:296] duration metric: took 128.35855ms for postStartSetup
	I1202 21:09:02.100202  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:02.119624  448211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/config.json ...
	I1202 21:09:02.119946  448211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:02.119998  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.139107  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.240356  448211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:02.244966  448211 start.go:128] duration metric: took 8.760184934s to createHost
	I1202 21:09:02.245044  448211 start.go:83] releasing machines lock for "addons-656754", held for 8.760370586s
	I1202 21:09:02.245137  448211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-656754
	I1202 21:09:02.262447  448211 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:02.262545  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.262809  448211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:02.262862  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:02.284303  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.296507  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:02.481807  448211 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:02.488179  448211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:09:02.538141  448211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:09:02.542549  448211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:02.542629  448211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:02.573398  448211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
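The find/mv pipeline above renames every bridge and podman CNI config with a .mk_disabled suffix so that the kindnet config installed later wins. The same idea in Go; the function name is mine, and this would need root against a real /etc/cni/net.d:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames any bridge or podman CNI config in dir with a
// .mk_disabled suffix, matching the effect of the logged find/mv command.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(d, err)
}
```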
	I1202 21:09:02.573425  448211 start.go:496] detecting cgroup driver to use...
	I1202 21:09:02.573460  448211 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:09:02.573515  448211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:09:02.592850  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:09:02.605566  448211 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:02.605678  448211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:02.624011  448211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:02.642921  448211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:02.771477  448211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:02.894856  448211 docker.go:234] disabling docker service ...
	I1202 21:09:02.894964  448211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:02.916596  448211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:02.930037  448211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:03.058442  448211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:03.187713  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:09:03.200186  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:03.213557  448211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:09:03.213625  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.222603  448211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:09:03.222675  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.231643  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.240257  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.248938  448211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:03.257532  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.266305  448211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.279725  448211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:09:03.288468  448211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:03.295721  448211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:03.302783  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:03.421682  448211 ssh_runner.go:195] Run: sudo systemctl restart crio
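The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning the pause image and forcing the cgroupfs cgroup manager, then restarts crio. An equivalent Go sketch of the two substitutions, regex-based like the sed calls but not the actual minikube code path:

```go
package main

import (
	"os"
	"regexp"
)

// patchCrioConf rewrites the pause_image and cgroup_manager lines of a
// crio drop-in config, mirroring the sed invocations in the log.
func patchCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	_ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
}
```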
	I1202 21:09:03.610102  448211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:09:03.610187  448211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:09:03.613933  448211 start.go:564] Will wait 60s for crictl version
	I1202 21:09:03.614000  448211 ssh_runner.go:195] Run: which crictl
	I1202 21:09:03.617306  448211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:03.652589  448211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:09:03.652773  448211 ssh_runner.go:195] Run: crio --version
	I1202 21:09:03.683240  448211 ssh_runner.go:195] Run: crio --version
	I1202 21:09:03.719005  448211 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 21:09:03.721942  448211 cli_runner.go:164] Run: docker network inspect addons-656754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:03.738544  448211 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:03.742661  448211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
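The /etc/hosts update above is made idempotent by first dropping any stale line for the host and then appending the fresh mapping. A Go sketch of the same grep-and-append pipeline; the function name is mine:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends the
// current "<ip>\t<host>" mapping, leaving all other entries untouched,
// just like the logged bash pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"))
}
```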
	I1202 21:09:03.753173  448211 kubeadm.go:884] updating cluster {Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:03.753302  448211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:09:03.753364  448211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:03.797799  448211 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:09:03.797824  448211 crio.go:433] Images already preloaded, skipping extraction
	I1202 21:09:03.797889  448211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:03.823645  448211 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:09:03.823670  448211 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:09:03.823679  448211 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 21:09:03.823821  448211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-656754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:09:03.823912  448211 ssh_runner.go:195] Run: crio config
	I1202 21:09:03.888535  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:09:03.888558  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:09:03.888575  448211 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:03.888598  448211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-656754 NodeName:addons-656754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:03.888730  448211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-656754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
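The generated kubeadm.yaml above is a four-document manifest — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — separated by ---. A toy Go helper for eyeballing such dumps, listing the kind: of each document; a real validator would unmarshal with a YAML library rather than scan strings:

```go
package main

import (
	"fmt"
	"strings"
)

// kinds returns the "kind:" of each document in a multi-doc YAML string.
// String scanning only; sufficient for a sanity check of dumps like the
// one in this log.
func kinds(manifest string) []string {
	var out []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	m := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration"
	fmt.Println(kinds(m)) // [InitConfiguration ClusterConfiguration]
}
```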
	
	I1202 21:09:03.888808  448211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 21:09:03.896420  448211 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:03.896498  448211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:03.904075  448211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 21:09:03.916869  448211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 21:09:03.929345  448211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1202 21:09:03.941919  448211 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:03.945617  448211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:09:03.955700  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:04.098960  448211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:04.116114  448211 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754 for IP: 192.168.49.2
	I1202 21:09:04.116178  448211 certs.go:195] generating shared ca certs ...
	I1202 21:09:04.116208  448211 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.116388  448211 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:09:04.298499  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt ...
	I1202 21:09:04.298532  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt: {Name:mkb7268e5d2cf4e490ec2757b1e751cce88ddc08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.298760  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key ...
	I1202 21:09:04.298771  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key: {Name:mkdde83518864eb9b1cff6e81c6693452a945a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:04.298852  448211 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:09:05.180432  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt ...
	I1202 21:09:05.180466  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt: {Name:mke28285c3a28f9ad2afd40d9b0e756b7a14c822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.180663  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key ...
	I1202 21:09:05.180676  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key: {Name:mkd75d5c61a930c130a6a239e8592d110d7f3480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.180758  448211 certs.go:257] generating profile certs ...
	I1202 21:09:05.180821  448211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key
	I1202 21:09:05.180838  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt with IP's: []
	I1202 21:09:05.230623  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt ...
	I1202 21:09:05.230648  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: {Name:mk0f70759a7c70fb2a447382a2388f55fc38c755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.230824  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key ...
	I1202 21:09:05.230838  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.key: {Name:mkd5ace339531361dfdc33e0f946bf26b87c6257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.230949  448211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521
	I1202 21:09:05.230972  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 21:09:05.512469  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 ...
	I1202 21:09:05.512499  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521: {Name:mk4c8cd0b801465a2237024ca94662ba57997484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.512678  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521 ...
	I1202 21:09:05.512693  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521: {Name:mke22ae8582c1300fd8908bc71c19cd6e64f6576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.512774  448211 certs.go:382] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt.c731d521 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt
	I1202 21:09:05.512852  448211 certs.go:386] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key.c731d521 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key
	I1202 21:09:05.512905  448211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key
	I1202 21:09:05.512925  448211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt with IP's: []
	I1202 21:09:05.871101  448211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt ...
	I1202 21:09:05.871133  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt: {Name:mka961017afb64a240b7bdf35c1f056407603063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.871315  448211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key ...
	I1202 21:09:05.871329  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key: {Name:mk736075cf81bf75740f699e84f8edbb27af1c62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:05.871514  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:05.871559  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:09:05.871589  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:05.871620  448211 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:05.872184  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:05.891801  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:05.910820  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:05.929216  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:09:05.946956  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 21:09:05.965801  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 21:09:05.984076  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:06.002512  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:09:06.027565  448211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:06.047684  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:06.062495  448211 ssh_runner.go:195] Run: openssl version
	I1202 21:09:06.069246  448211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:06.078339  448211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.082432  448211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.082522  448211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:06.124250  448211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
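The b5213941 above is the OpenSSL subject hash of minikubeCA.pem; symlinking <hash>.0 to the PEM lets TLS libraries that scan /etc/ssl/certs find the CA. A sketch of the hash-and-link step, shelling out to openssl exactly as the log does (the symlink needs root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA PEM and creates the
// <hash>.0 symlink in certsDir, mirroring the two logged commands.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
```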
	I1202 21:09:06.133155  448211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:06.136937  448211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 21:09:06.136988  448211 kubeadm.go:401] StartCluster: {Name:addons-656754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-656754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.137072  448211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:06.137139  448211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:06.167415  448211 cri.go:89] found id: ""
	I1202 21:09:06.167489  448211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:06.175532  448211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:09:06.183575  448211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:09:06.183663  448211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:09:06.191890  448211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:09:06.191911  448211 kubeadm.go:158] found existing configuration files:
	
	I1202 21:09:06.191965  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 21:09:06.199779  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:09:06.199867  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:09:06.207332  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 21:09:06.215111  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:09:06.215180  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:09:06.222888  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 21:09:06.230977  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:09:06.231081  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:09:06.238621  448211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 21:09:06.246390  448211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:09:06.246483  448211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:09:06.253820  448211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:09:06.293541  448211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 21:09:06.293604  448211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:09:06.318494  448211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:09:06.318571  448211 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:09:06.318612  448211 kubeadm.go:319] OS: Linux
	I1202 21:09:06.318662  448211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:09:06.318714  448211 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:09:06.318765  448211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:09:06.318816  448211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:09:06.318866  448211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:09:06.318918  448211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:09:06.318968  448211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:09:06.319035  448211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:09:06.319087  448211 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:09:06.395889  448211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:09:06.396004  448211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:09:06.396125  448211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:09:06.406396  448211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:09:06.413232  448211 out.go:252]   - Generating certificates and keys ...
	I1202 21:09:06.413331  448211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:09:06.413404  448211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:09:06.617290  448211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 21:09:07.128480  448211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 21:09:07.324242  448211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 21:09:07.670014  448211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 21:09:08.370628  448211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 21:09:08.370971  448211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-656754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:09:08.470089  448211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 21:09:08.470535  448211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-656754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:09:08.776289  448211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 21:09:09.028556  448211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 21:09:09.195953  448211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 21:09:09.196200  448211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:09:09.843871  448211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:09:10.317059  448211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:09:10.645220  448211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:09:10.760418  448211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:09:11.321988  448211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:09:11.322839  448211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:09:11.325787  448211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:09:11.331066  448211 out.go:252]   - Booting up control plane ...
	I1202 21:09:11.331177  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:09:11.331269  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:09:11.331344  448211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:09:11.345904  448211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:09:11.346186  448211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:09:11.354727  448211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:09:11.354831  448211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:09:11.354878  448211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:09:11.491535  448211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:09:11.491661  448211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:09:13.489284  448211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001018408s
	I1202 21:09:13.493007  448211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 21:09:13.493111  448211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 21:09:13.493221  448211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 21:09:13.493332  448211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 21:09:16.854768  448211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.36105387s
	I1202 21:09:18.080127  448211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.587080452s
	I1202 21:09:19.994798  448211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501659107s
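The control-plane-check phase polls the three health endpoints above until each returns 200 or the 4m0s budget runs out. A sketch of one such probe; the 500ms interval is a guess, and TLS verification is skipped because the apiserver certificate is not yet in any trust store at this point in the bootstrap:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute))
}
```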
	I1202 21:09:20.039672  448211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 21:09:20.058371  448211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 21:09:20.075296  448211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 21:09:20.075558  448211 kubeadm.go:319] [mark-control-plane] Marking the node addons-656754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 21:09:20.092888  448211 kubeadm.go:319] [bootstrap-token] Using token: s833ce.4fiprx753etcuhgl
	I1202 21:09:20.095752  448211 out.go:252]   - Configuring RBAC rules ...
	I1202 21:09:20.095884  448211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 21:09:20.103046  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 21:09:20.116377  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 21:09:20.122187  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 21:09:20.128505  448211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 21:09:20.133106  448211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 21:09:20.402878  448211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 21:09:20.848860  448211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 21:09:21.402130  448211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 21:09:21.403324  448211 kubeadm.go:319] 
	I1202 21:09:21.403405  448211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 21:09:21.403415  448211 kubeadm.go:319] 
	I1202 21:09:21.403492  448211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 21:09:21.403501  448211 kubeadm.go:319] 
	I1202 21:09:21.403526  448211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 21:09:21.403588  448211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 21:09:21.403644  448211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 21:09:21.403653  448211 kubeadm.go:319] 
	I1202 21:09:21.403707  448211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 21:09:21.403715  448211 kubeadm.go:319] 
	I1202 21:09:21.403762  448211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 21:09:21.403768  448211 kubeadm.go:319] 
	I1202 21:09:21.403820  448211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 21:09:21.403898  448211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 21:09:21.403970  448211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 21:09:21.403978  448211 kubeadm.go:319] 
	I1202 21:09:21.404080  448211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 21:09:21.404160  448211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 21:09:21.404166  448211 kubeadm.go:319] 
	I1202 21:09:21.404251  448211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s833ce.4fiprx753etcuhgl \
	I1202 21:09:21.404357  448211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d4cda52a6893d5340cae35e7f1bec4a8a826aaefc3b1aeca8da4a9d2d90cc2f0 \
	I1202 21:09:21.404381  448211 kubeadm.go:319] 	--control-plane 
	I1202 21:09:21.404389  448211 kubeadm.go:319] 
	I1202 21:09:21.404474  448211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 21:09:21.404481  448211 kubeadm.go:319] 
	I1202 21:09:21.404564  448211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s833ce.4fiprx753etcuhgl \
	I1202 21:09:21.404675  448211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d4cda52a6893d5340cae35e7f1bec4a8a826aaefc3b1aeca8da4a9d2d90cc2f0 
	I1202 21:09:21.407353  448211 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1202 21:09:21.407580  448211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:09:21.407689  448211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
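For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be re-derived on the control plane from the cluster CA. This is the standard recipe from the kubeadm documentation, assuming the default PKI path:

    # Recompute the SHA-256 hash of the CA public key (matches --discovery-token-ca-cert-hash)
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'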
	I1202 21:09:21.407709  448211 cni.go:84] Creating CNI manager for ""
	I1202 21:09:21.407717  448211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:09:21.412778  448211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 21:09:21.415607  448211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 21:09:21.419590  448211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 21:09:21.419607  448211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 21:09:21.434204  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
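Because the docker driver is paired with the crio runtime, minikube selects kindnet as the CNI, copies the rendered manifest onto the node at /var/tmp/minikube/cni.yaml, and applies it with the node-local kubectl, as logged above. A minimal way to replay that apply step by hand over the profile's SSH session (illustrative only; it assumes the manifest is still present on the node):

    minikube -p addons-656754 ssh -- sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml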
	I1202 21:09:21.750342  448211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 21:09:21.750502  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:21.750587  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-656754 minikube.k8s.io/updated_at=2025_12_02T21_09_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=addons-656754 minikube.k8s.io/primary=true
	I1202 21:09:21.929166  448211 ops.go:34] apiserver oom_adj: -16
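The -16 written for the apiserver's oom_adj biases the kernel away from OOM-killing kube-apiserver under memory pressure (the legacy /proc/<pid>/oom_adj scale runs from -17, never kill, to +15). A quick spot check from the host, assuming the profile is running:

    minikube -p addons-656754 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'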
	I1202 21:09:21.929273  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:22.430338  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:22.929392  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:23.430248  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:23.929946  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:24.429383  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:24.930075  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:25.429547  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:25.930067  448211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 21:09:26.041868  448211 kubeadm.go:1114] duration metric: took 4.29140643s to wait for elevateKubeSystemPrivileges
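The repeated `kubectl get sa default` calls above are a poll: the default ServiceAccount is created asynchronously after init, and minikube retries at roughly half-second intervals (note the timestamps) before finishing the privilege elevation. A sketch of the equivalent loop, run on the node:

    # Poll until the cluster has provisioned the default ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done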
	I1202 21:09:26.041895  448211 kubeadm.go:403] duration metric: took 19.90491098s to StartCluster
	I1202 21:09:26.041913  448211 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:26.042032  448211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:09:26.042409  448211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:26.042611  448211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:09:26.042792  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 21:09:26.043069  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:26.043103  448211 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
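The toEnable map above is the resolved addon set for this profile; everything marked true is enabled in the steps that follow. Individual addons can be toggled per profile from the CLI, e.g.:

    minikube -p addons-656754 addons enable metrics-server
    minikube -p addons-656754 addons disable volcano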
	I1202 21:09:26.043180  448211 addons.go:70] Setting yakd=true in profile "addons-656754"
	I1202 21:09:26.043194  448211 addons.go:239] Setting addon yakd=true in "addons-656754"
	I1202 21:09:26.043216  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.043723  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.044292  448211 addons.go:70] Setting inspektor-gadget=true in profile "addons-656754"
	I1202 21:09:26.044323  448211 addons.go:239] Setting addon inspektor-gadget=true in "addons-656754"
	I1202 21:09:26.044350  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.044819  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.044961  448211 addons.go:70] Setting metrics-server=true in profile "addons-656754"
	I1202 21:09:26.045005  448211 addons.go:239] Setting addon metrics-server=true in "addons-656754"
	I1202 21:09:26.045034  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.045446  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.045924  448211 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-656754"
	I1202 21:09:26.045952  448211 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-656754"
	I1202 21:09:26.045992  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.046472  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.047617  448211 addons.go:70] Setting cloud-spanner=true in profile "addons-656754"
	I1202 21:09:26.047658  448211 addons.go:239] Setting addon cloud-spanner=true in "addons-656754"
	I1202 21:09:26.047697  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.048217  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.048381  448211 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-656754"
	I1202 21:09:26.048398  448211 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-656754"
	I1202 21:09:26.048421  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.048823  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.054557  448211 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-656754"
	I1202 21:09:26.054638  448211 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-656754"
	I1202 21:09:26.054675  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.055235  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.063737  448211 addons.go:70] Setting registry=true in profile "addons-656754"
	I1202 21:09:26.063819  448211 addons.go:239] Setting addon registry=true in "addons-656754"
	I1202 21:09:26.063889  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.064418  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.067208  448211 addons.go:70] Setting default-storageclass=true in profile "addons-656754"
	I1202 21:09:26.067263  448211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-656754"
	I1202 21:09:26.067769  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.082288  448211 addons.go:70] Setting registry-creds=true in profile "addons-656754"
	I1202 21:09:26.082323  448211 addons.go:239] Setting addon registry-creds=true in "addons-656754"
	I1202 21:09:26.082365  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.082844  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.087065  448211 addons.go:70] Setting gcp-auth=true in profile "addons-656754"
	I1202 21:09:26.087107  448211 mustload.go:66] Loading cluster: addons-656754
	I1202 21:09:26.087441  448211 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:09:26.087694  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.097006  448211 addons.go:70] Setting storage-provisioner=true in profile "addons-656754"
	I1202 21:09:26.097040  448211 addons.go:239] Setting addon storage-provisioner=true in "addons-656754"
	I1202 21:09:26.097080  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.097566  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.098434  448211 addons.go:70] Setting ingress=true in profile "addons-656754"
	I1202 21:09:26.098460  448211 addons.go:239] Setting addon ingress=true in "addons-656754"
	I1202 21:09:26.098507  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.098925  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.120724  448211 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-656754"
	I1202 21:09:26.120758  448211 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-656754"
	I1202 21:09:26.121099  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.122719  448211 addons.go:70] Setting ingress-dns=true in profile "addons-656754"
	I1202 21:09:26.122748  448211 addons.go:239] Setting addon ingress-dns=true in "addons-656754"
	I1202 21:09:26.122790  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.123361  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.147366  448211 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:26.148353  448211 addons.go:70] Setting volcano=true in profile "addons-656754"
	I1202 21:09:26.148393  448211 addons.go:239] Setting addon volcano=true in "addons-656754"
	I1202 21:09:26.148429  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.149619  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.151790  448211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:26.188853  448211 addons.go:70] Setting volumesnapshots=true in profile "addons-656754"
	I1202 21:09:26.188888  448211 addons.go:239] Setting addon volumesnapshots=true in "addons-656754"
	I1202 21:09:26.188922  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.189424  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.219511  448211 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 21:09:26.289505  448211 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 21:09:26.330753  448211 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 21:09:26.332507  448211 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 21:09:26.332536  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 21:09:26.332597  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.335596  448211 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 21:09:26.336218  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 21:09:26.336299  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.350924  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.357897  448211 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 21:09:26.358218  448211 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 21:09:26.361693  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 21:09:26.361719  448211 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 21:09:26.361788  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.362032  448211 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 21:09:26.362075  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 21:09:26.362158  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.381754  448211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:26.384723  448211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:26.384748  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:26.384814  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.388082  448211 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 21:09:26.390947  448211 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 21:09:26.393730  448211 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 21:09:26.393752  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 21:09:26.393820  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.401408  448211 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 21:09:26.404466  448211 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 21:09:26.404491  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 21:09:26.404563  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.405573  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 21:09:26.405590  448211 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 21:09:26.405653  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.421009  448211 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 21:09:26.424830  448211 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 21:09:26.424854  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 21:09:26.424933  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.433475  448211 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-656754"
	I1202 21:09:26.433520  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.433934  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	W1202 21:09:26.441837  448211 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 21:09:26.445221  448211 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 21:09:26.445436  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 21:09:26.455436  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 21:09:26.460711  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 21:09:26.460798  448211 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 21:09:26.460919  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.481418  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 21:09:26.481955  448211 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 21:09:26.481982  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 21:09:26.482047  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.485088  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 21:09:26.504452  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 21:09:26.515372  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 21:09:26.519839  448211 addons.go:239] Setting addon default-storageclass=true in "addons-656754"
	I1202 21:09:26.519882  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:26.520331  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:26.528595  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
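Each sshutil client dials the node container's published 22/tcp port (33133 here) as user docker with the profile's machine key; the port comes from the docker inspect template logged above. An equivalent manual session, using the same values from this log:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-656754)
    ssh -i /home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa \
      -p "$PORT" docker@127.0.0.1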
	I1202 21:09:26.549265  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 21:09:26.549680  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:26.550952  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.561577  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 21:09:26.567176  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 21:09:26.573913  448211 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 21:09:26.583118  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 21:09:26.583146  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 21:09:26.583217  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.587050  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:26.587258  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.598423  448211 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 21:09:26.598505  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 21:09:26.598630  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.642863  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.644879  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.659991  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.671303  448211 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 21:09:26.674256  448211 out.go:179]   - Using image docker.io/busybox:stable
	I1202 21:09:26.677023  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.677406  448211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 21:09:26.677421  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 21:09:26.677983  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.678348  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.685013  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.724777  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.727201  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.745943  448211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:26.745967  448211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:26.746028  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:26.784208  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.785663  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.791751  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:26.804057  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:27.053623  448211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.010802597s)
	I1202 21:09:27.053636  448211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:27.053848  448211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
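The sed pipeline above splices a hosts plugin stanza into the CoreDNS Corefile just before the forward block (plus a log directive before errors), so in-cluster lookups of host.minikube.internal resolve to the gateway 192.168.49.1. Reconstructed from the sed expression, the injected stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }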
	I1202 21:09:27.208181  448211 node_ready.go:35] waiting up to 6m0s for node "addons-656754" to be "Ready" ...
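node_ready then polls the node object until its Ready condition goes true. The same check can be made externally; a sketch using the profile's kubectl context:

    kubectl --context addons-656754 wait --for=condition=Ready node/addons-656754 --timeout=6m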
	I1202 21:09:27.326760  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 21:09:27.326785  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 21:09:27.345695  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 21:09:27.345726  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 21:09:27.359933  448211 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 21:09:27.359970  448211 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 21:09:27.370794  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 21:09:27.377137  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 21:09:27.377165  448211 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 21:09:27.406489  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 21:09:27.419043  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:27.483156  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 21:09:27.483199  448211 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 21:09:27.497638  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 21:09:27.497664  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 21:09:27.514286  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 21:09:27.546232  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 21:09:27.571950  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 21:09:27.571986  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 21:09:27.584233  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 21:09:27.584270  448211 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 21:09:27.587417  448211 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 21:09:27.587447  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 21:09:27.602622  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.605349  448211 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 21:09:27.605374  448211 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 21:09:27.605712  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 21:09:27.610499  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 21:09:27.630191  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 21:09:27.630262  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 21:09:27.632503  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 21:09:27.632566  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 21:09:27.649254  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 21:09:27.668493  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 21:09:27.680449  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 21:09:27.727469  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 21:09:27.745054  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 21:09:27.745119  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 21:09:27.789749  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 21:09:27.789825  448211 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 21:09:27.926584  448211 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 21:09:27.926656  448211 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 21:09:27.951425  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 21:09:27.951491  448211 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 21:09:27.985416  448211 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 21:09:27.985482  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 21:09:28.164889  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 21:09:28.166759  448211 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 21:09:28.166816  448211 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 21:09:28.169023  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 21:09:28.169084  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 21:09:28.309761  448211 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:28.309824  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 21:09:28.424164  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 21:09:28.424226  448211 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 21:09:28.584293  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:28.664643  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 21:09:28.664721  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 21:09:28.911983  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 21:09:28.912088  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 21:09:29.136439  448211 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 21:09:29.136501  448211 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1202 21:09:29.226244  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:29.377185  448211 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.323310085s)
	I1202 21:09:29.377260  448211 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 21:09:29.429629  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 21:09:29.893626  448211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-656754" context rescaled to 1 replicas
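Rescaling coredns to a single replica is minikube's usual trim for a one-node cluster (kubeadm deploys two by default). It is equivalent to:

    kubectl --context addons-656754 -n kube-system scale deployment coredns --replicas=1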
	W1202 21:09:31.254253  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:31.480781  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.109951054s)
	I1202 21:09:31.480899  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.074386859s)
	I1202 21:09:31.480932  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.06185483s)
	I1202 21:09:31.480984  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.966676196s)
	I1202 21:09:31.481063  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.878419367s)
	I1202 21:09:31.481082  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.875354445s)
	I1202 21:09:31.481099  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.870574786s)
	I1202 21:09:31.481117  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.934761677s)
	W1202 21:09:31.568848  448211 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
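The default-storageclass failure above is a standard optimistic-concurrency conflict: the StorageClass's resourceVersion changed between minikube's read and its update, so the API server rejects the stale write. Re-running the callback, or patching the annotation directly once the object settles, succeeds; the direct form is:

    kubectl --context addons-656754 patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'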
	I1202 21:09:32.239273  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.589939086s)
	I1202 21:09:32.239567  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.512027594s)
	I1202 21:09:32.239593  448211 addons.go:495] Verifying addon metrics-server=true in "addons-656754"
	I1202 21:09:32.239570  448211 addons.go:495] Verifying addon ingress=true in "addons-656754"
	I1202 21:09:32.239651  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.074687454s)
	I1202 21:09:32.239517  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.558996838s)
	I1202 21:09:32.239958  448211 addons.go:495] Verifying addon registry=true in "addons-656754"
	I1202 21:09:32.239489  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.570931567s)
	I1202 21:09:32.243514  448211 out.go:179] * Verifying registry addon...
	I1202 21:09:32.243522  448211 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-656754 service yakd-dashboard -n yakd-dashboard
	
	I1202 21:09:32.243643  448211 out.go:179] * Verifying ingress addon...
	I1202 21:09:32.247215  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 21:09:32.248921  448211 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 21:09:32.258910  448211 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 21:09:32.258934  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:32.259756  448211 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 21:09:32.259776  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
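These kapi waits poll by label until each addon pod reports Ready. The external equivalent, as a sketch (timeout chosen arbitrarily):

    kubectl --context addons-656754 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=registry --timeout=10m
    kubectl --context addons-656754 -n ingress-nginx wait --for=condition=Ready \
      pod -l app.kubernetes.io/name=ingress-nginx --timeout=10m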
	I1202 21:09:32.329857  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.745467828s)
	W1202 21:09:32.329936  448211 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 21:09:32.329974  448211 retry.go:31] will retry after 180.889217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
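The failure above is a CRD/CR ordering race: all six manifests go through a single kubectl apply, and the csi-hostpath-snapclass object is validated before the just-created volumesnapshotclasses CRD is established, hence "ensure CRDs are installed first". The retry below succeeds once the CRDs settle. One way to serialize it by hand:

    # Apply the CRD first, wait for it to be established, then apply the CR
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml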
	I1202 21:09:32.511831  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 21:09:32.595901  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.166186746s)
	I1202 21:09:32.595930  448211 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-656754"
	I1202 21:09:32.598966  448211 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 21:09:32.602745  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 21:09:32.620952  448211 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 21:09:32.621019  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:32.758629  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:32.759207  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.106245  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:33.251902  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.252533  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:33.606481  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:33.711559  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:33.750897  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:33.752816  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:34.032055  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 21:09:34.032165  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:34.051075  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:34.107262  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:34.184803  448211 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 21:09:34.197890  448211 addons.go:239] Setting addon gcp-auth=true in "addons-656754"
	I1202 21:09:34.197939  448211 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:09:34.198385  448211 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:09:34.217926  448211 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 21:09:34.217980  448211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:09:34.239513  448211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:09:34.252237  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:34.252879  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:34.606469  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:34.750356  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:34.752435  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.106955  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:35.252265  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:35.252527  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.312595  448211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.800673169s)
	I1202 21:09:35.312655  448211 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.094704805s)
	I1202 21:09:35.315984  448211 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 21:09:35.318817  448211 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 21:09:35.321602  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 21:09:35.321625  448211 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 21:09:35.336229  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 21:09:35.336255  448211 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 21:09:35.350325  448211 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 21:09:35.350347  448211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 21:09:35.363375  448211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 21:09:35.607261  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:09:35.757116  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:35.757541  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:35.882846  448211 addons.go:495] Verifying addon gcp-auth=true in "addons-656754"
	I1202 21:09:35.886559  448211 out.go:179] * Verifying gcp-auth addon...
	I1202 21:09:35.890259  448211 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 21:09:35.894296  448211 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 21:09:35.894367  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:09:36.106646  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1202 21:09:36.211413  448211 node_ready.go:57] node "addons-656754" has "Ready":"False" status (will retry)
	I1202 21:09:36.250517  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:09:36.253006  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:09:36.393720  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 257 near-identical watch lines elided: the same four label selectors (kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) were each re-polled roughly every 500ms from 21:09:36 to 21:10:07 and stayed "Pending: [<nil>]", while node_ready.go:57 repeated the node "addons-656754" has "Ready":"False" status (will retry) warning at ~2.5s intervals ...]
	I1202 21:10:07.105752  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:07.250806  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:07.252798  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:07.393683  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:07.606715  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:07.744905  448211 node_ready.go:49] node "addons-656754" is "Ready"
	I1202 21:10:07.744937  448211 node_ready.go:38] duration metric: took 40.536721997s for node "addons-656754" to be "Ready" ...
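The flip above, from roughly 40 seconds of "Ready":"False" warnings to is "Ready", comes down to reading the node's NodeReady condition. A client-go approximation, with the same imports and clientset assumption as the earlier sketch (this is not minikube's node_ready.go itself):

    // nodeIsReady reports whether the named node's NodeReady condition is
    // True, the check behind the "Ready":"False" / is "Ready" lines here.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }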
	I1202 21:10:07.744951  448211 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:10:07.745019  448211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:10:07.775481  448211 api_server.go:72] duration metric: took 41.732828612s to wait for apiserver process to appear ...
	I1202 21:10:07.775508  448211 api_server.go:88] waiting for apiserver healthz status ...
	I1202 21:10:07.775528  448211 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 21:10:07.778303  448211 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 21:10:07.778327  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:07.778470  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:07.789029  448211 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 21:10:07.798627  448211 api_server.go:141] control plane version: v1.34.2
	I1202 21:10:07.798659  448211 api_server.go:131] duration metric: took 23.143958ms to wait for apiserver health ...
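The healthz wait above is an HTTPS GET against https://192.168.49.2:8443/healthz that counts as healthy once it sees HTTP 200 with body "ok". A minimal probe in the same shape; skipping TLS verification is a simplification for the sketch (a faithful check would trust the cluster CA instead):

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy GETs <base>/healthz and treats 200 + "ok" as healthy,
    // matching the `returned 200: ok` pair of lines above.
    func apiserverHealthy(base string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for the sketch only; real code should verify the cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }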
	I1202 21:10:07.798669  448211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 21:10:07.830103  448211 system_pods.go:59] 19 kube-system pods found
	I1202 21:10:07.830140  448211 system_pods.go:61] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending
	I1202 21:10:07.830147  448211 system_pods.go:61] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:07.830151  448211 system_pods.go:61] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:07.830156  448211 system_pods.go:61] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:07.830159  448211 system_pods.go:61] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:07.830164  448211 system_pods.go:61] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:07.830167  448211 system_pods.go:61] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:07.830171  448211 system_pods.go:61] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:07.830175  448211 system_pods.go:61] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending
	I1202 21:10:07.830180  448211 system_pods.go:61] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:07.830184  448211 system_pods.go:61] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:07.830188  448211 system_pods.go:61] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:07.830195  448211 system_pods.go:61] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:07.830199  448211 system_pods.go:61] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending
	I1202 21:10:07.830203  448211 system_pods.go:61] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:07.830213  448211 system_pods.go:61] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:07.830216  448211 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:07.830220  448211 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:07.830223  448211 system_pods.go:61] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending
	I1202 21:10:07.830237  448211 system_pods.go:74] duration metric: took 31.560606ms to wait for pod list to return data ...
	I1202 21:10:07.830245  448211 default_sa.go:34] waiting for default service account to be created ...
	I1202 21:10:07.838601  448211 default_sa.go:45] found service account: "default"
	I1202 21:10:07.838635  448211 default_sa.go:55] duration metric: took 8.380143ms for default service account to be created ...
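The default_sa wait is the same poll-until-present idea applied to a single object: fetch the "default" ServiceAccount until the Get succeeds. A sketch, reusing the imports and clientset assumption from the label-selector example:

    // hasDefaultSA reports whether the "default" ServiceAccount exists yet
    // in ns; callers poll it the way default_sa.go does above.
    func hasDefaultSA(ctx context.Context, cs *kubernetes.Clientset, ns string) bool {
        _, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
        return err == nil
    }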
	I1202 21:10:07.838647  448211 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 21:10:07.850634  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:07.850663  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending
	I1202 21:10:07.850669  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:07.850673  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:07.850677  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:07.850682  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:07.850686  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:07.850690  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:07.850695  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:07.850699  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending
	I1202 21:10:07.850703  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:07.850708  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:07.850713  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:07.850725  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:07.850734  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:07.850742  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:07.850748  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:07.850751  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:07.850762  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:07.850765  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending
	I1202 21:10:07.850778  448211 retry.go:31] will retry after 298.887349ms: missing components: kube-dns
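The retry.go:31 lines in this stretch show the re-check delay growing with randomization (299ms, 323ms, 379ms, 463ms, ...) while a required component, kube-dns here, is still missing. A simple jittered exponential backoff in the same spirit; a generic sketch, not minikube's retry package:

    import (
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs check until it succeeds or attempts run out,
    // sleeping base*2^i plus up to 50% random jitter between tries.
    func retryWithJitter(check func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = check(); err == nil {
                return nil
            }
            d := base << i
            d += time.Duration(rand.Int63n(int64(d)/2 + 1))
            time.Sleep(d)
        }
        return err
    }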
	I1202 21:10:07.898231  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:08.117429  448211 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 21:10:08.117451  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:08.161398  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.161443  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.161451  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:08.161457  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending
	I1202 21:10:08.161462  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending
	I1202 21:10:08.161466  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.161472  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.161480  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.161488  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.161495  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.161501  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.161506  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.161517  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending
	I1202 21:10:08.161522  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:08.161527  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.161537  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending
	I1202 21:10:08.161542  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending
	I1202 21:10:08.161546  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending
	I1202 21:10:08.161556  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending
	I1202 21:10:08.161561  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.161576  448211 retry.go:31] will retry after 322.72241ms: missing components: kube-dns
	I1202 21:10:08.260579  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:08.269334  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:08.405765  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:08.498989  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.499045  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.499052  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending
	I1202 21:10:08.499060  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:08.499066  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:08.499071  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.499078  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.499086  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.499091  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.499097  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.499106  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.499111  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.499118  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:08.499128  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending
	I1202 21:10:08.499134  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.499140  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:08.499150  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:08.499157  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.499166  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.499174  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.499188  448211 retry.go:31] will retry after 379.485511ms: missing components: kube-dns
	I1202 21:10:08.607041  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:08.757542  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:08.757863  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:08.885214  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:08.885253  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:10:08.885263  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 21:10:08.885271  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:08.885279  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:08.885283  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:08.885288  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:08.885293  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:08.885297  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:08.885305  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:08.885308  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:08.885313  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:08.885330  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:08.885341  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 21:10:08.885350  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:08.885361  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:08.885367  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:08.885379  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.885385  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:08.885391  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 21:10:08.885406  448211 retry.go:31] will retry after 462.835389ms: missing components: kube-dns
	I1202 21:10:08.894029  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:09.107030  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:09.250641  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:09.252051  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:09.352064  448211 system_pods.go:86] 19 kube-system pods found
	I1202 21:10:09.352103  448211 system_pods.go:89] "coredns-66bc5c9577-2bvm4" [2a3abe91-f0ba-486e-9a38-0f81a31632eb] Running
	I1202 21:10:09.352123  448211 system_pods.go:89] "csi-hostpath-attacher-0" [5270951d-7d3a-4aeb-b82a-4a96213f8132] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 21:10:09.352130  448211 system_pods.go:89] "csi-hostpath-resizer-0" [cda3aea0-c5f2-4ddf-9dda-b85af3e62490] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 21:10:09.352137  448211 system_pods.go:89] "csi-hostpathplugin-j29dk" [afcea2e4-6486-4ff8-9720-6bb18d51aa2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 21:10:09.352142  448211 system_pods.go:89] "etcd-addons-656754" [faa3787f-d914-4941-a15a-8ae6b7fcb409] Running
	I1202 21:10:09.352146  448211 system_pods.go:89] "kindnet-gvt9x" [2a342b19-2af2-41da-a6dd-282efb6f06f5] Running
	I1202 21:10:09.352151  448211 system_pods.go:89] "kube-apiserver-addons-656754" [609a50cd-3091-4ea3-a29d-e7d4fed0d159] Running
	I1202 21:10:09.352155  448211 system_pods.go:89] "kube-controller-manager-addons-656754" [da82a3e9-6da6-4b16-bfd4-c02410756f17] Running
	I1202 21:10:09.352161  448211 system_pods.go:89] "kube-ingress-dns-minikube" [e204af51-876d-4b22-b8c1-a89dcc9c2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 21:10:09.352165  448211 system_pods.go:89] "kube-proxy-zqc2s" [63a8ab28-deac-47b6-b80f-3027dbda685d] Running
	I1202 21:10:09.352170  448211 system_pods.go:89] "kube-scheduler-addons-656754" [8ab0416d-b4f4-4ba6-9ac2-e8557c1d6f04] Running
	I1202 21:10:09.352176  448211 system_pods.go:89] "metrics-server-85b7d694d7-bsktp" [829aeaf5-84d1-4167-a63c-e8d5f30a05e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 21:10:09.352187  448211 system_pods.go:89] "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 21:10:09.352193  448211 system_pods.go:89] "registry-6b586f9694-gbhfb" [44969411-94b0-4c68-8b2a-863d5769849e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 21:10:09.352199  448211 system_pods.go:89] "registry-creds-764b6fb674-bgqc9" [b9193e40-6002-48d5-8fce-7e6beaee342f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 21:10:09.352207  448211 system_pods.go:89] "registry-proxy-2zlcv" [9169f753-02a4-4e49-8306-b4d10234061b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 21:10:09.352213  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2fl6z" [e70b7374-a3fb-4c13-b332-06d815975689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:09.352219  448211 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgbl5" [31b1bd50-e4c1-4660-a12d-863e957b53eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 21:10:09.352226  448211 system_pods.go:89] "storage-provisioner" [d6352c3f-4076-4683-ad9f-975333da159c] Running
	I1202 21:10:09.352235  448211 system_pods.go:126] duration metric: took 1.513581205s to wait for k8s-apps to be running ...
	I1202 21:10:09.352246  448211 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 21:10:09.352303  448211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:10:09.365599  448211 system_svc.go:56] duration metric: took 13.344222ms WaitForService to wait for kubelet
	I1202 21:10:09.365686  448211 kubeadm.go:587] duration metric: took 43.323050112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:10:09.365711  448211 node_conditions.go:102] verifying NodePressure condition ...
	I1202 21:10:09.368551  448211 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 21:10:09.368582  448211 node_conditions.go:123] node cpu capacity is 2
	I1202 21:10:09.368596  448211 node_conditions.go:105] duration metric: took 2.878795ms to run NodePressure ...
	I1202 21:10:09.368609  448211 start.go:242] waiting for startup goroutines ...
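
The system_svc and kubeadm.go lines above time two kinds of waits: a shell check that the kubelet systemd unit is active, and a map of named readiness conditions (apiserver, apps_running, default_sa, ...), each reported with a "duration metric". A minimal sketch of that shape, with the non-kubelet conditions stubbed out as assumptions, might be:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubeletActive mirrors the system_svc check in the log: ask systemd
// whether the kubelet unit is running (exit status 0 means active).
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

// waitFor polls check every 500ms until it passes or timeout expires,
// then reports a duration metric like the log lines above.
func waitFor(name string, check func() bool, timeout time.Duration) error {
	start := time.Now()
	for !check() {
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out waiting for %s", name)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("duration metric: took %v to wait for %s\n", time.Since(start), name)
	return nil
}

func main() {
	// Named conditions, analogous to map[apiserver:true apps_running:true ...].
	conditions := map[string]func() bool{
		"kubelet":      kubeletActive,
		"apps_running": func() bool { return true }, // stand-in condition
	}
	for name, check := range conditions {
		if err := waitFor(name, check, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
}
```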
	I1202 21:10:09.393620  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:09.606258  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 21:10:09.751670  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 21:10:09.753196  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same four kapi.go:96 "waiting for pod ... current state: Pending" lines repeat on a ~500ms cycle from 21:10:09 through 21:10:48, all selectors still Pending; the repeated lines are elided here ...]
	I1202 21:10:48.750698  448211 kapi.go:107] duration metric: took 1m16.503495075s to wait for kubernetes.io/minikube-addons=registry ...
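
The kapi.go:96 lines are per-label poll loops, each checking roughly twice a second whether the pod matching its selector has left Pending; kapi.go:107 then records how long the whole wait took (1m16.5s here for the registry label). A hedged sketch of that loop shape, with the API lookup faked out (phaseFor is an assumption standing in for a real list-pods-by-selector call), is:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// phaseFor stands in for listing pods by label selector through the
// Kubernetes API; here the pod simply turns Running after five ticks.
func phaseFor(selector string, tick int) string {
	if tick < 5 {
		return "Pending"
	}
	return "Running"
}

// waitForLabel polls one selector until its pod is Running, emitting
// the two log shapes seen above: kapi.go:96 while Pending and the
// kapi.go:107 duration metric once the wait completes.
func waitForLabel(selector string, interval time.Duration) {
	start := time.Now()
	for tick := 0; ; tick++ {
		if phaseFor(selector, tick) == "Running" {
			fmt.Printf("duration metric: took %v to wait for %s ...\n",
				time.Since(start), selector)
			return
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(interval)
	}
}

func main() {
	// One waiter per addon label; the interleaved timestamps in the log
	// suggest the real waits also run concurrently.
	selectors := []string{
		"kubernetes.io/minikube-addons=registry",
		"app.kubernetes.io/name=ingress-nginx",
	}
	var wg sync.WaitGroup
	for _, s := range selectors {
		wg.Add(1)
		go func(sel string) {
			defer wg.Done()
			waitForLabel(sel, 500*time.Millisecond)
		}(s)
	}
	wg.Wait()
}
```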
	I1202 21:10:48.752908  448211 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 21:10:48.893692  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:49.106891  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... with registry done, the remaining three selectors keep polling on the same ~500ms cycle through 21:10:58, all still Pending; the repeated lines are elided here ...]
	I1202 21:10:58.752530  448211 kapi.go:107] duration metric: took 1m26.503605645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 21:10:58.893818  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:10:59.106404  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... with ingress-nginx done, gcp-auth and csi-hostpath-driver polling repeats through 21:11:05, both still Pending; the repeated lines are elided here ...]
	I1202 21:11:05.606883  448211 kapi.go:107] duration metric: took 1m33.004138306s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 21:11:05.898562  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:06.394344  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:06.893843  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:07.394123  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:07.893143  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:08.393701  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:08.894325  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:09.393814  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:09.894318  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:10.405311  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:10.894431  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:11.394400  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:11.894097  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:12.393818  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:12.893431  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:13.394284  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:13.893751  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:14.397300  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:14.894006  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:15.393721  448211 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 21:11:15.893545  448211 kapi.go:107] duration metric: took 1m40.003286776s to wait for kubernetes.io/minikube-addons=gcp-auth ...
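
The kapi.go lines above record a label-selector wait: list the pods matching a label, report the phase while any are still Pending, and emit a duration metric once all are Running. A minimal sketch of that loop under those assumptions, using client-go (the helper name and 500ms interval are illustrative, not minikube's actual implementation):

    // waitForPodsByLabel polls pods matching selector in ns until every
    // match reports phase Running, mirroring the kapi.go log lines above.
    package kapi

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // treat transient API errors as "keep polling"
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return len(pods.Items) > 0, nil
            })
        if err == nil {
            fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }
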
	I1202 21:11:15.896512  448211 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-656754 cluster.
	I1202 21:11:15.899361  448211 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 21:11:15.902171  448211 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
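
The opt-out described above is just a pod label the gcp-auth mutating webhook looks for. A sketch of a pod carrying it, written with client-go types; the pod and container names are placeholders, and since the message above asks only for the `gcp-auth-skip-secret` key, the value used here is arbitrary:

    // skipGCPAuthPod opts out of credential injection via the label key
    // named in the minikube output above; name and image are placeholders.
    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var skipGCPAuthPod = corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "no-creds-pod",
            Labels: map[string]string{"gcp-auth-skip-secret": "true"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
        },
    }
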
	I1202 21:11:15.905287  448211 out.go:179] * Enabled addons: inspektor-gadget, nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, cloud-spanner, registry-creds, storage-provisioner-rancher, metrics-server, ingress-dns, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1202 21:11:15.908139  448211 addons.go:530] duration metric: took 1m49.865030127s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin storage-provisioner amd-gpu-device-plugin cloud-spanner registry-creds storage-provisioner-rancher metrics-server ingress-dns yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1202 21:11:15.908194  448211 start.go:247] waiting for cluster config update ...
	I1202 21:11:15.908219  448211 start.go:256] writing updated cluster config ...
	I1202 21:11:15.908519  448211 ssh_runner.go:195] Run: rm -f paused
	I1202 21:11:15.913699  448211 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 21:11:15.916933  448211 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2bvm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.921191  448211 pod_ready.go:94] pod "coredns-66bc5c9577-2bvm4" is "Ready"
	I1202 21:11:15.921217  448211 pod_ready.go:86] duration metric: took 4.258354ms for pod "coredns-66bc5c9577-2bvm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.923383  448211 pod_ready.go:83] waiting for pod "etcd-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.927785  448211 pod_ready.go:94] pod "etcd-addons-656754" is "Ready"
	I1202 21:11:15.927813  448211 pod_ready.go:86] duration metric: took 4.403275ms for pod "etcd-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.929950  448211 pod_ready.go:83] waiting for pod "kube-apiserver-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.936054  448211 pod_ready.go:94] pod "kube-apiserver-addons-656754" is "Ready"
	I1202 21:11:15.936080  448211 pod_ready.go:86] duration metric: took 6.095011ms for pod "kube-apiserver-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:15.938108  448211 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.317100  448211 pod_ready.go:94] pod "kube-controller-manager-addons-656754" is "Ready"
	I1202 21:11:16.317133  448211 pod_ready.go:86] duration metric: took 379.000587ms for pod "kube-controller-manager-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.518509  448211 pod_ready.go:83] waiting for pod "kube-proxy-zqc2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:16.917654  448211 pod_ready.go:94] pod "kube-proxy-zqc2s" is "Ready"
	I1202 21:11:16.917685  448211 pod_ready.go:86] duration metric: took 399.147304ms for pod "kube-proxy-zqc2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.118126  448211 pod_ready.go:83] waiting for pod "kube-scheduler-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.517697  448211 pod_ready.go:94] pod "kube-scheduler-addons-656754" is "Ready"
	I1202 21:11:17.517724  448211 pod_ready.go:86] duration metric: took 399.569338ms for pod "kube-scheduler-addons-656754" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:11:17.517739  448211 pod_ready.go:40] duration metric: took 1.604009204s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
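
Behind the pod_ready.go lines, "Ready" means the pod's PodReady status condition is True, as Kubernetes defines it. A minimal sketch of that check (hypothetical helper, not minikube's actual function):

    // isPodReady reports whether a pod's PodReady condition is True,
    // the test the pod_ready.go waits above converge on.
    package podready

    import corev1 "k8s.io/api/core/v1"

    func isPodReady(p *corev1.Pod) bool {
        for _, cond := range p.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
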
	I1202 21:11:17.581683  448211 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 21:11:17.585158  448211 out.go:179] * Done! kubectl is now configured to use "addons-656754" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 21:12:06 addons-656754 crio[828]: time="2025-12-02T21:12:06.668968703Z" level=info msg="Stopping pod sandbox: 1599687cf58b6209dc6c6f9fb1085e2f9d3c615a3705b36ae161aa1d8e4aa738" id=0fb0bf7c-8aa4-4b06-80a2-bf6f6832a921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:12:06 addons-656754 crio[828]: time="2025-12-02T21:12:06.669288222Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-3df1e97b-8903-4317-b848-7da6166c304a Namespace:local-path-storage ID:1599687cf58b6209dc6c6f9fb1085e2f9d3c615a3705b36ae161aa1d8e4aa738 UID:58ebcbdc-d54b-4b82-963d-350dab2e13d2 NetNS:/var/run/netns/3fd3c27e-9ce0-48be-8ca4-56444609cf3f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400127dcc0}] Aliases:map[]}"
	Dec 02 21:12:06 addons-656754 crio[828]: time="2025-12-02T21:12:06.669431206Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-3df1e97b-8903-4317-b848-7da6166c304a from CNI network \"kindnet\" (type=ptp)"
	Dec 02 21:12:06 addons-656754 crio[828]: time="2025-12-02T21:12:06.693126703Z" level=info msg="Stopped pod sandbox: 1599687cf58b6209dc6c6f9fb1085e2f9d3c615a3705b36ae161aa1d8e4aa738" id=0fb0bf7c-8aa4-4b06-80a2-bf6f6832a921 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:12:07 addons-656754 crio[828]: time="2025-12-02T21:12:07.676317329Z" level=info msg="Removing container: ebbc4d6633ad9f2af5811b4fe31a2f24e50c3de1947455117c50af0688ad0b28" id=e4986f82-d44f-4de4-b3ec-3bd70b2f170d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:07 addons-656754 crio[828]: time="2025-12-02T21:12:07.678607841Z" level=info msg="Error loading conmon cgroup of container ebbc4d6633ad9f2af5811b4fe31a2f24e50c3de1947455117c50af0688ad0b28: cgroup deleted" id=e4986f82-d44f-4de4-b3ec-3bd70b2f170d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:07 addons-656754 crio[828]: time="2025-12-02T21:12:07.68378792Z" level=info msg="Removed container ebbc4d6633ad9f2af5811b4fe31a2f24e50c3de1947455117c50af0688ad0b28: local-path-storage/helper-pod-delete-pvc-3df1e97b-8903-4317-b848-7da6166c304a/helper-pod" id=e4986f82-d44f-4de4-b3ec-3bd70b2f170d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.303712354Z" level=info msg="Stopping container: 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3 (timeout: 30s)" id=af4ec85e-f54a-4417-a33a-e55343fed8ba name=/runtime.v1.RuntimeService/StopContainer
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.416684271Z" level=info msg="Stopped container 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3: default/task-pv-pod/task-pv-container" id=af4ec85e-f54a-4417-a33a-e55343fed8ba name=/runtime.v1.RuntimeService/StopContainer
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.417442614Z" level=info msg="Stopping pod sandbox: 0d2159dca275bd3c51a4e762b235ed4ea8221ef704f944d5ad24a8d89313c8ba" id=35b346aa-eb15-46ab-82ff-ceee2d27b917 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.417719081Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:0d2159dca275bd3c51a4e762b235ed4ea8221ef704f944d5ad24a8d89313c8ba UID:040d5f44-476b-42c5-82a6-9291342d8f5f NetNS:/var/run/netns/c91c159a-c9b2-4529-aa7b-512ed30318c0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b250}] Aliases:map[]}"
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.417871017Z" level=info msg="Deleting pod default_task-pv-pod from CNI network \"kindnet\" (type=ptp)"
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.449523534Z" level=info msg="Stopped pod sandbox: 0d2159dca275bd3c51a4e762b235ed4ea8221ef704f944d5ad24a8d89313c8ba" id=35b346aa-eb15-46ab-82ff-ceee2d27b917 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.688839499Z" level=info msg="Removing container: 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3" id=6702f9cd-d35b-42ad-b766-ae3c6f86aa93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.710171465Z" level=info msg="Error loading conmon cgroup of container 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3: cgroup deleted" id=6702f9cd-d35b-42ad-b766-ae3c6f86aa93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:09 addons-656754 crio[828]: time="2025-12-02T21:12:09.727710166Z" level=info msg="Removed container 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3: default/task-pv-pod/task-pv-container" id=6702f9cd-d35b-42ad-b766-ae3c6f86aa93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.82098842Z" level=info msg="Running pod sandbox: default/task-pv-pod-restore/POD" id=17710ea3-b742-47c6-bfec-4709998f7103 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.821056392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.835448645Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64 UID:5aa63b05-401e-4b7e-8f3e-cfe64abb8b17 NetNS:/var/run/netns/43a9d9a9-f4ec-4f4f-b692-b130b6821d40 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40040f9000}] Aliases:map[]}"
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.835626887Z" level=info msg="Adding pod default_task-pv-pod-restore to CNI network \"kindnet\" (type=ptp)"
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.845265773Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64 UID:5aa63b05-401e-4b7e-8f3e-cfe64abb8b17 NetNS:/var/run/netns/43a9d9a9-f4ec-4f4f-b692-b130b6821d40 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40040f9000}] Aliases:map[]}"
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.845594843Z" level=info msg="Checking pod default_task-pv-pod-restore for CNI network kindnet (type=ptp)"
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.860962885Z" level=info msg="Ran pod sandbox 97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64 with infra container: default/task-pv-pod-restore/POD" id=17710ea3-b742-47c6-bfec-4709998f7103 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.869443252Z" level=info msg="Pulling image: docker.io/nginx:latest" id=a35520e3-d9f1-48d5-bbfb-346f1f5f6327 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:12:12 addons-656754 crio[828]: time="2025-12-02T21:12:12.871204421Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	1c7e91226b3e5       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            11 seconds ago       Exited              busybox                                  0                   9316f207b09d6       test-local-path                                              default
	33ac7ecb29b90       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            20 seconds ago       Exited              helper-pod                               0                   611282277d926       helper-pod-create-pvc-3df1e97b-8903-4317-b848-7da6166c304a   local-path-storage
	dd3c5e853140d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          52 seconds ago       Running             busybox                                  0                   b184bbc27f328       busybox                                                      default
	058e26f7cd421       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 57 seconds ago       Running             gcp-auth                                 0                   ca9df7a63d8ac       gcp-auth-78565c9fb4-qclvf                                    gcp-auth
	dc64ec80fd551       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    3                   3d829a02a5007       gcp-auth-certs-patch-hxfnv                                   gcp-auth
	bbdebabfcb42b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	f21eb5e720d99       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	7fb33e06679de       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	2a54239c986b6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	c7438d482556e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	b5bf09863cdee       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             About a minute ago   Running             controller                               0                   7665f9a4fb4d8       ingress-nginx-controller-6c8bf45fb-vdzzc                     ingress-nginx
	075f5a4ebaab7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   f6b4ce447dcc8       gadget-qk5vw                                                 gadget
	1047a51792cd7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   617c5d06767e2       registry-proxy-2zlcv                                         kube-system
	a0bf837335fef       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   f1fa50166ee69       csi-hostpath-attacher-0                                      kube-system
	9c0b0e08f43aa       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    2                   92966298d3b25       ingress-nginx-admission-patch-2fnsb                          ingress-nginx
	7f1f14868c074       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c2e307d3f8155       snapshot-controller-7d9fbc56b8-cgbl5                         kube-system
	eadd870941895       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   49ad3652b4bf4       csi-hostpathplugin-j29dk                                     kube-system
	6b9323a78a161       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   5b168a0bfa7fa       snapshot-controller-7d9fbc56b8-2fl6z                         kube-system
	346d71544b514       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   d581658c076a4       csi-hostpath-resizer-0                                       kube-system
	eac5adf21505c       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   52f0c46f32c8c       nvidia-device-plugin-daemonset-gmn2x                         kube-system
	3e550292e1371       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   68efd17f7e901       registry-6b586f9694-gbhfb                                    kube-system
	c6e8e65b52e2c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   3c8134dc2be06       ingress-nginx-admission-create-mt6ld                         ingress-nginx
	0ea8245394cbd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   a0351cc8a1031       yakd-dashboard-5ff678cb9-znnvc                               yakd-dashboard
	27c1564d21921       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   3f0e9bc019d23       kube-ingress-dns-minikube                                    kube-system
	9110e1016ee8f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   084de1db7dd14       local-path-provisioner-648f6765c9-6pxcn                      local-path-storage
	a6bae13c92728       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   8f93d5c3c5c6a       metrics-server-85b7d694d7-bsktp                              kube-system
	7b4811c87b3a1       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               2 minutes ago        Running             cloud-spanner-emulator                   0                   55e15ef64d8d5       cloud-spanner-emulator-5bdddb765-qldsf                       default
	8fbc644a70c18       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             2 minutes ago        Running             coredns                                  0                   ffc0db9a71f6e       coredns-66bc5c9577-2bvm4                                     kube-system
	507385b0545f3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   90219c938a4de       storage-provisioner                                          kube-system
	6557f84007b18       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   caa8eb5f927fa       kube-proxy-zqc2s                                             kube-system
	4767c189dbb1d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   de085f245dcae       kindnet-gvt9x                                                kube-system
	3a609b1131be3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   66b640e50d6bd       kube-controller-manager-addons-656754                        kube-system
	17a3bd5107c3d       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   d129dec847e42       etcd-addons-656754                                           kube-system
	870c81e888423       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   2668be2759f0c       kube-apiserver-addons-656754                                 kube-system
	8ccf79252e522       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   f6a2619ed74d3       kube-scheduler-addons-656754                                 kube-system
	
	
	==> coredns [8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3] <==
	[INFO] 10.244.0.18:53741 - 47462 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002736795s
	[INFO] 10.244.0.18:53741 - 23775 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145462s
	[INFO] 10.244.0.18:53741 - 63222 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000140983s
	[INFO] 10.244.0.18:60337 - 56977 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151427s
	[INFO] 10.244.0.18:60337 - 57207 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187219s
	[INFO] 10.244.0.18:57145 - 4415 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113404s
	[INFO] 10.244.0.18:57145 - 4835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000279002s
	[INFO] 10.244.0.18:55259 - 54747 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091234s
	[INFO] 10.244.0.18:55259 - 54911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082315s
	[INFO] 10.244.0.18:59313 - 60811 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.000884137s
	[INFO] 10.244.0.18:59313 - 61000 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001388036s
	[INFO] 10.244.0.18:38522 - 50221 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156384s
	[INFO] 10.244.0.18:38522 - 50090 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087049s
	[INFO] 10.244.0.21:60056 - 28281 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157113s
	[INFO] 10.244.0.21:56128 - 7187 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072329s
	[INFO] 10.244.0.21:40780 - 60258 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096739s
	[INFO] 10.244.0.21:40779 - 2755 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064838s
	[INFO] 10.244.0.21:46290 - 6481 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008293s
	[INFO] 10.244.0.21:37281 - 63024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076809s
	[INFO] 10.244.0.21:38058 - 23165 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002161182s
	[INFO] 10.244.0.21:34499 - 16120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001543271s
	[INFO] 10.244.0.21:51715 - 41965 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000695729s
	[INFO] 10.244.0.21:55075 - 44185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002695784s
	[INFO] 10.244.0.23:38429 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000340196s
	[INFO] 10.244.0.23:42091 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012828s
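
The NXDOMAIN chains above are resolver search-path expansion: with the kubelet default of ndots:5, a name like registry.kube-system.svc.cluster.local (four dots) is tried against each search domain before being tried as absolute, and only the final bare lookup answers NOERROR. A small sketch of that candidate ordering, with the search list inferred from the queries above:

    // candidates reproduces the lookup order visible in the CoreDNS log:
    // names with fewer dots than ndots go through the search list first.
    package dnssearch

    import "strings"

    func candidates(name string, search []string, ndots int) []string {
        if strings.Count(name, ".") >= ndots {
            // enough dots: tried as-is first (search fallback omitted here)
            return []string{name + "."}
        }
        out := make([]string, 0, len(search)+1)
        for _, d := range search {
            out = append(out, name+"."+d+".")
        }
        return append(out, name+".")
    }

    // candidates("registry.kube-system.svc.cluster.local",
    //     []string{"kube-system.svc.cluster.local", "svc.cluster.local",
    //         "cluster.local", "us-east-2.compute.internal"}, 5)
    // yields the same sequence of queries CoreDNS answered above.
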
	
	
	==> describe nodes <==
	Name:               addons-656754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-656754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=addons-656754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T21_09_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-656754
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-656754"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 21:09:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-656754
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 21:12:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 21:11:54 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 21:11:54 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 21:11:54 +0000   Tue, 02 Dec 2025 21:09:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 21:11:54 +0000   Tue, 02 Dec 2025 21:10:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-656754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                31dfc91e-3dfb-4d63-a545-376482e19a5f
	  Boot ID:                    c77b83b8-287c-4d91-bf3a-e2991f41400e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  default                     cloud-spanner-emulator-5bdddb765-qldsf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-qk5vw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-78565c9fb4-qclvf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-vdzzc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m41s
	  kube-system                 coredns-66bc5c9577-2bvm4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m47s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 csi-hostpathplugin-j29dk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 etcd-addons-656754                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m52s
	  kube-system                 kindnet-gvt9x                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m47s
	  kube-system                 kube-apiserver-addons-656754                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-controller-manager-addons-656754       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-zqc2s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-addons-656754                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 metrics-server-85b7d694d7-bsktp             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m43s
	  kube-system                 nvidia-device-plugin-daemonset-gmn2x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-6b586f9694-gbhfb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 registry-creds-764b6fb674-bgqc9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 registry-proxy-2zlcv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-2fl6z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 snapshot-controller-7d9fbc56b8-cgbl5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  local-path-storage          local-path-provisioner-648f6765c9-6pxcn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-znnvc              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 2m46s            kube-proxy       
	  Normal   Starting                 3m               kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m (x8 over 3m)  kubelet          Node addons-656754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m (x8 over 3m)  kubelet          Node addons-656754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m (x8 over 3m)  kubelet          Node addons-656754 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m53s            kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m53s            kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m52s            kubelet          Node addons-656754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m52s            kubelet          Node addons-656754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m52s            kubelet          Node addons-656754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m49s            node-controller  Node addons-656754 event: Registered Node addons-656754 in Controller
	  Normal   NodeReady                2m6s             kubelet          Node addons-656754 status is now: NodeReady
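
The conditions in the describe output above come straight from the node's status object; a sketch reading the same conditions with client-go, assuming a configured clientset:

    // printNodeConditions lists the node conditions shown in the
    // "describe nodes" section above (MemoryPressure, Ready, ...).
    package nodeinfo

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeConditions(ctx context.Context, c kubernetes.Interface, name string) error {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, cond := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
        }
        return nil
    }
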
	
	
	==> dmesg <==
	[Dec 2 18:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6] <==
	{"level":"warn","ts":"2025-12-02T21:09:16.602876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.621523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.650146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.688522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.728039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.755353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.776825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.807244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.857566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.868169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.891237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.905790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.928266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.943231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.968936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:16.995983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.008986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.051367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:17.129149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:32.923907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:32.943921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.016321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.034288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.084781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:09:55.119425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [058e26f7cd4215cb1afa2046249fb1992e03c9dc587ec4e53f4ba748a2174521] <==
	2025/12/02 21:11:15 GCP Auth Webhook started!
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:18 Ready to marshal response ...
	2025/12/02 21:11:18 Ready to write response ...
	2025/12/02 21:11:40 Ready to marshal response ...
	2025/12/02 21:11:40 Ready to write response ...
	2025/12/02 21:11:51 Ready to marshal response ...
	2025/12/02 21:11:51 Ready to write response ...
	2025/12/02 21:11:51 Ready to marshal response ...
	2025/12/02 21:11:51 Ready to write response ...
	2025/12/02 21:11:56 Ready to marshal response ...
	2025/12/02 21:11:56 Ready to write response ...
	2025/12/02 21:12:04 Ready to marshal response ...
	2025/12/02 21:12:04 Ready to write response ...
	2025/12/02 21:12:12 Ready to marshal response ...
	2025/12/02 21:12:12 Ready to write response ...
	
	
	==> kernel <==
	 21:12:13 up  2:54,  0 user,  load average: 1.84, 1.84, 1.68
	Linux addons-656754 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513] <==
	I1202 21:10:07.040061       1 main.go:301] handling current node
	I1202 21:10:17.039370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:10:17.039402       1 main.go:301] handling current node
	I1202 21:10:27.039134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:10:27.039163       1 main.go:301] handling current node
	I1202 21:10:37.039469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:10:37.039520       1 main.go:301] handling current node
	I1202 21:10:47.039330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:10:47.039377       1 main.go:301] handling current node
	I1202 21:10:57.039934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:10:57.039962       1 main.go:301] handling current node
	I1202 21:11:07.040145       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:07.040199       1 main.go:301] handling current node
	I1202 21:11:17.040053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:17.040093       1 main.go:301] handling current node
	I1202 21:11:27.039333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:27.039372       1 main.go:301] handling current node
	I1202 21:11:37.039302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:37.039356       1 main.go:301] handling current node
	I1202 21:11:47.039902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:47.039939       1 main.go:301] handling current node
	I1202 21:11:57.039319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:11:57.039359       1 main.go:301] handling current node
	I1202 21:12:07.040058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:12:07.040097       1 main.go:301] handling current node
	
	
	==> kube-apiserver [870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754] <==
	I1202 21:09:32.542900       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.229.140"}
	W1202 21:09:32.922101       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1202 21:09:32.938410       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1202 21:09:35.741250       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.14.36"}
	W1202 21:09:55.016128       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.034040       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.084916       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:09:55.115672       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 21:10:07.673457       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.673504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:07.681579       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.681616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:07.785060       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.14.36:443: connect: connection refused
	E1202 21:10:07.785197       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.14.36:443: connect: connection refused" logger="UnhandledError"
	E1202 21:10:26.245860       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.154.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.154.108:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.154.108:443: connect: connection refused" logger="UnhandledError"
	W1202 21:10:26.246605       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 21:10:26.246668       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 21:10:26.330314       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 21:11:28.598430       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34350: use of closed network connection
	E1202 21:11:28.820675       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34372: use of closed network connection
	E1202 21:11:28.957389       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34400: use of closed network connection
	I1202 21:12:08.129631       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6] <==
	I1202 21:09:25.019530       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 21:09:25.021817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:09:25.021838       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 21:09:25.021847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 21:09:25.022473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 21:09:25.022513       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 21:09:25.022882       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 21:09:25.024035       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 21:09:25.024271       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 21:09:25.024772       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 21:09:25.027265       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 21:09:25.030184       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 21:09:25.030339       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 21:09:25.030635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1202 21:09:30.539657       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 21:09:54.998598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 21:09:54.998766       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 21:09:54.998837       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 21:09:55.033181       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 21:09:55.048355       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 21:09:55.102508       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 21:09:55.151374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:10:09.969131       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1202 21:10:25.109693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 21:10:25.159987       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd] <==
	I1202 21:09:26.954182       1 server_linux.go:53] "Using iptables proxy"
	I1202 21:09:27.097627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 21:09:27.197917       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 21:09:27.197955       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 21:09:27.198040       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 21:09:27.249572       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 21:09:27.249632       1 server_linux.go:132] "Using iptables Proxier"
	I1202 21:09:27.257633       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 21:09:27.260407       1 server.go:527] "Version info" version="v1.34.2"
	I1202 21:09:27.260430       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:09:27.269791       1 config.go:106] "Starting endpoint slice config controller"
	I1202 21:09:27.269813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 21:09:27.270105       1 config.go:200] "Starting service config controller"
	I1202 21:09:27.270112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 21:09:27.270411       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 21:09:27.270421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 21:09:27.270812       1 config.go:309] "Starting node config controller"
	I1202 21:09:27.270819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 21:09:27.270825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 21:09:27.370331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 21:09:27.370404       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 21:09:27.370579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d] <==
	E1202 21:09:18.091535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 21:09:18.091534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 21:09:18.091631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 21:09:18.091635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 21:09:18.091676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 21:09:18.091792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 21:09:18.093815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 21:09:18.093935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 21:09:18.913935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 21:09:18.938717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 21:09:18.982354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 21:09:19.018351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 21:09:19.098295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 21:09:19.151613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 21:09:19.194164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 21:09:19.203807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 21:09:19.209169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 21:09:19.221514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 21:09:19.228265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 21:09:19.247712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 21:09:19.248865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 21:09:19.256147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 21:09:19.299542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 21:09:19.641388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 21:09:21.761268       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 21:12:06 addons-656754 kubelet[1256]: I1202 21:12:06.851263    1256 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4kvb\" (UniqueName: \"kubernetes.io/projected/58ebcbdc-d54b-4b82-963d-350dab2e13d2-kube-api-access-w4kvb\") on node \"addons-656754\" DevicePath \"\""
	Dec 02 21:12:07 addons-656754 kubelet[1256]: I1202 21:12:07.674762    1256 scope.go:117] "RemoveContainer" containerID="ebbc4d6633ad9f2af5811b4fe31a2f24e50c3de1947455117c50af0688ad0b28"
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.472790    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/040d5f44-476b-42c5-82a6-9291342d8f5f-gcp-creds\") pod \"040d5f44-476b-42c5-82a6-9291342d8f5f\" (UID: \"040d5f44-476b-42c5-82a6-9291342d8f5f\") "
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.472874    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bh8c\" (UniqueName: \"kubernetes.io/projected/040d5f44-476b-42c5-82a6-9291342d8f5f-kube-api-access-7bh8c\") pod \"040d5f44-476b-42c5-82a6-9291342d8f5f\" (UID: \"040d5f44-476b-42c5-82a6-9291342d8f5f\") "
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.472866    1256 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/040d5f44-476b-42c5-82a6-9291342d8f5f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "040d5f44-476b-42c5-82a6-9291342d8f5f" (UID: "040d5f44-476b-42c5-82a6-9291342d8f5f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.472985    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^87e3c210-cfc3-11f0-a1a6-12fd298fba74\") pod \"040d5f44-476b-42c5-82a6-9291342d8f5f\" (UID: \"040d5f44-476b-42c5-82a6-9291342d8f5f\") "
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.473082    1256 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/040d5f44-476b-42c5-82a6-9291342d8f5f-gcp-creds\") on node \"addons-656754\" DevicePath \"\""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.483708    1256 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^87e3c210-cfc3-11f0-a1a6-12fd298fba74" (OuterVolumeSpecName: "task-pv-storage") pod "040d5f44-476b-42c5-82a6-9291342d8f5f" (UID: "040d5f44-476b-42c5-82a6-9291342d8f5f"). InnerVolumeSpecName "pvc-c749ee56-a424-44a6-bc49-62efae12c4b5". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.484151    1256 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040d5f44-476b-42c5-82a6-9291342d8f5f-kube-api-access-7bh8c" (OuterVolumeSpecName: "kube-api-access-7bh8c") pod "040d5f44-476b-42c5-82a6-9291342d8f5f" (UID: "040d5f44-476b-42c5-82a6-9291342d8f5f"). InnerVolumeSpecName "kube-api-access-7bh8c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.573600    1256 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bh8c\" (UniqueName: \"kubernetes.io/projected/040d5f44-476b-42c5-82a6-9291342d8f5f-kube-api-access-7bh8c\") on node \"addons-656754\" DevicePath \"\""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.573667    1256 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-c749ee56-a424-44a6-bc49-62efae12c4b5\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^87e3c210-cfc3-11f0-a1a6-12fd298fba74\") on node \"addons-656754\" "
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.579942    1256 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-c749ee56-a424-44a6-bc49-62efae12c4b5" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^87e3c210-cfc3-11f0-a1a6-12fd298fba74") on node "addons-656754"
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.674006    1256 reconciler_common.go:299] "Volume detached for volume \"pvc-c749ee56-a424-44a6-bc49-62efae12c4b5\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^87e3c210-cfc3-11f0-a1a6-12fd298fba74\") on node \"addons-656754\" DevicePath \"\""
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.686067    1256 scope.go:117] "RemoveContainer" containerID="82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3"
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.729956    1256 scope.go:117] "RemoveContainer" containerID="82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3"
	Dec 02 21:12:09 addons-656754 kubelet[1256]: E1202 21:12:09.731294    1256 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3\": container with ID starting with 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3 not found: ID does not exist" containerID="82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3"
	Dec 02 21:12:09 addons-656754 kubelet[1256]: I1202 21:12:09.731338    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3"} err="failed to get container status \"82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3\": rpc error: code = NotFound desc = could not find container \"82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3\": container with ID starting with 82fa35a3c59641fb0c7343951786b2e17cee9b661312ded6f943c6499f421ee3 not found: ID does not exist"
	Dec 02 21:12:10 addons-656754 kubelet[1256]: E1202 21:12:10.694045    1256 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-bgqc9" podUID="b9193e40-6002-48d5-8fce-7e6beaee342f"
	Dec 02 21:12:10 addons-656754 kubelet[1256]: I1202 21:12:10.808110    1256 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040d5f44-476b-42c5-82a6-9291342d8f5f" path="/var/lib/kubelet/pods/040d5f44-476b-42c5-82a6-9291342d8f5f/volumes"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: I1202 21:12:12.598390    1256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5f29\" (UniqueName: \"kubernetes.io/projected/5aa63b05-401e-4b7e-8f3e-cfe64abb8b17-kube-api-access-j5f29\") pod \"task-pv-pod-restore\" (UID: \"5aa63b05-401e-4b7e-8f3e-cfe64abb8b17\") " pod="default/task-pv-pod-restore"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: I1202 21:12:12.598457    1256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-622cb05e-f490-4f08-8563-f8943a375c3c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^91d943a9-cfc3-11f0-a1a6-12fd298fba74\") pod \"task-pv-pod-restore\" (UID: \"5aa63b05-401e-4b7e-8f3e-cfe64abb8b17\") " pod="default/task-pv-pod-restore"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: I1202 21:12:12.598516    1256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5aa63b05-401e-4b7e-8f3e-cfe64abb8b17-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"5aa63b05-401e-4b7e-8f3e-cfe64abb8b17\") " pod="default/task-pv-pod-restore"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: I1202 21:12:12.731547    1256 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-622cb05e-f490-4f08-8563-f8943a375c3c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^91d943a9-cfc3-11f0-a1a6-12fd298fba74\") pod \"task-pv-pod-restore\" (UID: \"5aa63b05-401e-4b7e-8f3e-cfe64abb8b17\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/a38f06426753af795f01487ca0232fe5b40e0343a53560ad0184974f6e6f8b9b/globalmount\"" pod="default/task-pv-pod-restore"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: I1202 21:12:12.804837    1256 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2zlcv" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 21:12:12 addons-656754 kubelet[1256]: W1202 21:12:12.856320    1256 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/efe0c78f1497d744a6545c09c3401b58b4766f6eec05267ac8e14285c9373036/crio-97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64 WatchSource:0}: Error finding container 97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64: Status 404 returned error can't find the container with id 97f4c84119e58d05a555c23c6fe0f16ee8d60db9b3c554fec817d0772638ec64
	
	
	==> storage-provisioner [507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e] <==
	W1202 21:11:48.935551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:50.938742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:50.950231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:52.953726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:52.958680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:54.962294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:54.973541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:56.977791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:56.985182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:58.998181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:11:59.005713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:01.008918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:01.017071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:03.020029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:03.025018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:05.032746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:05.051379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:07.058444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:07.064265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:09.068205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:09.074993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:11.078859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:11.085196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:13.088877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:12:13.093484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-656754 -n addons-656754
helpers_test.go:269: (dbg) Run:  kubectl --context addons-656754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb registry-creds-764b6fb674-bgqc9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb registry-creds-764b6fb674-bgqc9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb registry-creds-764b6fb674-bgqc9: exit status 1 (83.733637ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mt6ld" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2fnsb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-bgqc9" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-656754 describe pod ingress-nginx-admission-create-mt6ld ingress-nginx-admission-patch-2fnsb registry-creds-764b6fb674-bgqc9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable headlamp --alsologtostderr -v=1: exit status 11 (268.3134ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:14.699178  455930 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:14.700025  455930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:14.700045  455930 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:14.700051  455930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:14.700433  455930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:14.700812  455930 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:14.701267  455930 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:14.701285  455930 addons.go:622] checking whether the cluster is paused
	I1202 21:12:14.701418  455930 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:14.701436  455930 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:14.702039  455930 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:14.721297  455930 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:14.721359  455930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:14.739184  455930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:14.841894  455930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:14.841991  455930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:14.872383  455930 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:14.872401  455930 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:14.872406  455930 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:14.872410  455930 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:14.872419  455930 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:14.872423  455930 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:14.872426  455930 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:14.872430  455930 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:14.872433  455930 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:14.872440  455930 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:14.872443  455930 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:14.872446  455930 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:14.872449  455930 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:14.872452  455930 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:14.872455  455930 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:14.872462  455930 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:14.872466  455930 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:14.872470  455930 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:14.872473  455930 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:14.872476  455930 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:14.872481  455930 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:14.872484  455930 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:14.872487  455930 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:14.872490  455930 cri.go:89] found id: ""
	I1202 21:12:14.872538  455930 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:14.887386  455930 out.go:203] 
	W1202 21:12:14.890325  455930 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:14.890354  455930 out.go:285] * 
	W1202 21:12:14.896048  455930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:14.898842  455930 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.57s)
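Note: this failure (and the CloudSpanner and LocalPath failures below) exits with MK_ADDON_DISABLE_PAUSED because the addon-disable path first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that command fails with `open /run/runc: no such file or directory`, i.e. runc's default state directory is absent on this crio node. A minimal sketch of re-running the same check by hand, assuming the addons-656754 profile from this run is still up (the commands mirror the ssh_runner lines in the stderr above; this is a hedged reproduction, not an official minikube debugging recipe):

	# List kube-system container IDs, as the cri.go lines above do:
	out/minikube-linux-arm64 -p addons-656754 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Reproduce the failing paused-state probe (expected: "open /run/runc: no such file or directory"):
	out/minikube-linux-arm64 -p addons-656754 ssh -- sudo runc list -f json

If the second command fails the same way while crictl still lists containers, a bare runc invocation simply has no state under /run/runc on this image, which suggests the CRI-O runtime keeps its container state elsewhere.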

TestAddons/parallel/CloudSpanner (6.31s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner


=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-qldsf" [9cb9d412-6cac-4e86-ae88-981ff14d4a38] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004195414s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (287.907241ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:11.101072  455356 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:11.101711  455356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:11.101740  455356 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:11.101761  455356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:11.102137  455356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:11.102491  455356 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:11.103616  455356 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:11.103677  455356 addons.go:622] checking whether the cluster is paused
	I1202 21:12:11.103847  455356 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:11.103880  455356 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:11.104673  455356 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:11.127359  455356 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:11.127413  455356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:11.153433  455356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:11.273606  455356 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:11.273689  455356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:11.304618  455356 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:11.304641  455356 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:11.304646  455356 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:11.304650  455356 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:11.304654  455356 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:11.304658  455356 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:11.304662  455356 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:11.304665  455356 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:11.304669  455356 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:11.304675  455356 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:11.304679  455356 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:11.304682  455356 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:11.304685  455356 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:11.304689  455356 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:11.304692  455356 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:11.304705  455356 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:11.304708  455356 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:11.304713  455356 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:11.304716  455356 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:11.304719  455356 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:11.304728  455356 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:11.304732  455356 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:11.304743  455356 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:11.304746  455356 cri.go:89] found id: ""
	I1202 21:12:11.304797  455356 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:11.319103  455356 out.go:203] 
	W1202 21:12:11.322092  455356 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:11.322130  455356 out.go:285] * 
	W1202 21:12:11.327892  455356 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:11.330810  455356 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.31s)
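Note: the cloud-spanner-emulator pod itself became healthy in about 6s; only the disable step failed, with the same MK_ADDON_DISABLE_PAUSED error as Headlamp above. For reference, a hedged kubectl equivalent of the 6m0s readiness wait the harness performs (label selector and namespace taken from the log; the harness uses its own polling rather than `kubectl wait`):

	kubectl --context addons-656754 -n default wait pod -l app=cloud-spanner-emulator --for=condition=Ready --timeout=6m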

TestAddons/parallel/LocalPath (13.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath


=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-656754 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-656754 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a1699f77-7238-4ea7-8ed3-0f406a4c84c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a1699f77-7238-4ea7-8ed3-0f406a4c84c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a1699f77-7238-4ea7-8ed3-0f406a4c84c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003300433s
addons_test.go:967: (dbg) Run:  kubectl --context addons-656754 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 ssh "cat /opt/local-path-provisioner/pvc-3df1e97b-8903-4317-b848-7da6166c304a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-656754 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-656754 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (279.157477ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 21:12:04.801953  455180 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:12:04.802846  455180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:04.802886  455180 out.go:374] Setting ErrFile to fd 2...
	I1202 21:12:04.802906  455180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:12:04.803251  455180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:12:04.803654  455180 mustload.go:66] Loading cluster: addons-656754
	I1202 21:12:04.804143  455180 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:04.804201  455180 addons.go:622] checking whether the cluster is paused
	I1202 21:12:04.804375  455180 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:12:04.804426  455180 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:12:04.805260  455180 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:12:04.830449  455180 ssh_runner.go:195] Run: systemctl --version
	I1202 21:12:04.830508  455180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:12:04.849106  455180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:12:04.953697  455180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:12:04.953787  455180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:12:04.983600  455180 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:12:04.983622  455180 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:12:04.983627  455180 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:12:04.983631  455180 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:12:04.983634  455180 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:12:04.983638  455180 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:12:04.983641  455180 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:12:04.983644  455180 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:12:04.983647  455180 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:12:04.983653  455180 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:12:04.983656  455180 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:12:04.983659  455180 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:12:04.983662  455180 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:12:04.983666  455180 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:12:04.983670  455180 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:12:04.983675  455180 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:12:04.983678  455180 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:12:04.983682  455180 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:12:04.983685  455180 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:12:04.983688  455180 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:12:04.983692  455180 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:12:04.983696  455180 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:12:04.983698  455180 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:12:04.983702  455180 cri.go:89] found id: ""
	I1202 21:12:04.983757  455180 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:12:05.001846  455180 out.go:203] 
	W1202 21:12:05.007181  455180 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:12:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:12:05.007217  455180 out.go:285] * 
	* 
	W1202 21:12:05.013361  455180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:12:05.017249  455180 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (13.52s)
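Each of the addon-disable failures in this report has the same signature: "minikube addons disable" first checks whether the cluster is paused, and that check shells out to "sudo runc list -f json", which exits 1 here with "open /run/runc: no such file or directory" (CRI-O is evidently keeping its runc state somewhere other than the default /run/runc root). A minimal sketch of that probe, runnable by hand against this profile; the alternate --root value below is an assumption, not something this log confirms:

	# the paused-state probe minikube runs (fails exactly as logged above)
	minikube -p addons-656754 ssh -- sudo runc list -f json
	# assumed workaround: point runc at CRI-O's relocated state directory
	minikube -p addons-656754 ssh -- sudo runc --root /run/crio/runc list -f json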

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gmn2x" [ed80ce0e-8b25-41e7-99a3-93d96ed803c7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00377793s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (291.873357ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:11:51.259190  454687 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:11:51.260415  454687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:51.260452  454687 out.go:374] Setting ErrFile to fd 2...
	I1202 21:11:51.260472  454687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:51.260757  454687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:11:51.261082  454687 mustload.go:66] Loading cluster: addons-656754
	I1202 21:11:51.261495  454687 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:51.261537  454687 addons.go:622] checking whether the cluster is paused
	I1202 21:11:51.261669  454687 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:51.261703  454687 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:11:51.262225  454687 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:11:51.279939  454687 ssh_runner.go:195] Run: systemctl --version
	I1202 21:11:51.280057  454687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:11:51.308760  454687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:11:51.426493  454687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:11:51.426591  454687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:11:51.467572  454687 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:11:51.467597  454687 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:11:51.467603  454687 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:11:51.467607  454687 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:11:51.467610  454687 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:11:51.467618  454687 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:11:51.467622  454687 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:11:51.467625  454687 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:11:51.467627  454687 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:11:51.467634  454687 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:11:51.467637  454687 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:11:51.467640  454687 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:11:51.467643  454687 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:11:51.467646  454687 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:11:51.467649  454687 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:11:51.467654  454687 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:11:51.467657  454687 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:11:51.467660  454687 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:11:51.467663  454687 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:11:51.467666  454687 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:11:51.467670  454687 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:11:51.467673  454687 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:11:51.467676  454687 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:11:51.467679  454687 cri.go:89] found id: ""
	I1202 21:11:51.467731  454687 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:11:51.483124  454687 out.go:203] 
	W1202 21:11:51.486083  454687 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:11:51.486110  454687 out.go:285] * 
	* 
	W1202 21:11:51.491729  454687 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:11:51.494696  454687 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.30s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-znnvc" [bce62dd6-c978-4fe4-b9ac-594dad68a2dd] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003269731s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-656754 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-656754 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.230791ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:11:35.296642  454352 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:11:35.297412  454352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:35.297430  454352 out.go:374] Setting ErrFile to fd 2...
	I1202 21:11:35.297438  454352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:11:35.297731  454352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:11:35.298057  454352 mustload.go:66] Loading cluster: addons-656754
	I1202 21:11:35.298471  454352 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:35.298492  454352 addons.go:622] checking whether the cluster is paused
	I1202 21:11:35.298640  454352 config.go:182] Loaded profile config "addons-656754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:11:35.298658  454352 host.go:66] Checking if "addons-656754" exists ...
	I1202 21:11:35.299269  454352 cli_runner.go:164] Run: docker container inspect addons-656754 --format={{.State.Status}}
	I1202 21:11:35.316451  454352 ssh_runner.go:195] Run: systemctl --version
	I1202 21:11:35.316508  454352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-656754
	I1202 21:11:35.333482  454352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/addons-656754/id_rsa Username:docker}
	I1202 21:11:35.437630  454352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:11:35.437715  454352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:11:35.467533  454352 cri.go:89] found id: "bbdebabfcb42b065c93c7b8c63eacd87f6d6eb46b5d1662c458f56d1c780b039"
	I1202 21:11:35.467556  454352 cri.go:89] found id: "f21eb5e720d9916c8f0d2080657cbb6ddf4224196ed874fe8708a17099333ac6"
	I1202 21:11:35.467561  454352 cri.go:89] found id: "7fb33e06679de7e08ab62a7734f3d935dc66cf175f8f8a668fd371615738c236"
	I1202 21:11:35.467565  454352 cri.go:89] found id: "2a54239c986b6497fd50d1d1a6b61bde3c2aa4df3578a220305fbbe06bb8e117"
	I1202 21:11:35.467569  454352 cri.go:89] found id: "c7438d482556e636bfccc5b4ed7365355ff5880e2350ed12cb191a42f5d5c8f5"
	I1202 21:11:35.467572  454352 cri.go:89] found id: "1047a51792cd7e48127636ca4d38573e3647de1837a0b154f917b071613a23ba"
	I1202 21:11:35.467576  454352 cri.go:89] found id: "a0bf837335fef7f546da1571f539f3031e7c3d9d8fe0bd8c01134bcf5bbb8b47"
	I1202 21:11:35.467580  454352 cri.go:89] found id: "7f1f14868c0742b8363e1adb20038f172f8079e2b350fcba40d0f618ea2b94a6"
	I1202 21:11:35.467583  454352 cri.go:89] found id: "eadd870941895991244ab395ac23aa00747d9406ad82047f6316053c064fae44"
	I1202 21:11:35.467596  454352 cri.go:89] found id: "6b9323a78a161c66a41141eb196ed9e6524ddec4724a78350ede07c96ea46b98"
	I1202 21:11:35.467600  454352 cri.go:89] found id: "346d71544b514de511426b58d76ccea64d05f011b2df7c6f1a6237efec06f727"
	I1202 21:11:35.467603  454352 cri.go:89] found id: "eac5adf21505cf4b182089cc1afe48e9ec04852f7c0cd8fba2798eb73904977d"
	I1202 21:11:35.467608  454352 cri.go:89] found id: "3e550292e137141aa3ae5a014e43d70951e3ec85dcf992a07449273844a2962e"
	I1202 21:11:35.467611  454352 cri.go:89] found id: "27c1564d21921a7e8b842a2b27c7b20f2866275d0fddcc9a15048d3ef8b011c2"
	I1202 21:11:35.467615  454352 cri.go:89] found id: "a6bae13c9272861d4b860b6f071559f6e7592880d745359834532da831451612"
	I1202 21:11:35.467626  454352 cri.go:89] found id: "8fbc644a70c186195906a3d3e4917661ef51dbd74ae7e5847e594193ea72afe3"
	I1202 21:11:35.467629  454352 cri.go:89] found id: "507385b0545f3698f9af424e25e3a4b157b16e3e1351d30bfe00bb2e7bfa454e"
	I1202 21:11:35.467633  454352 cri.go:89] found id: "6557f84007b18fe69d74c4825436691966121cb8f7a827d7bece8d73a1abf7fd"
	I1202 21:11:35.467636  454352 cri.go:89] found id: "4767c189dbb1dba640c8407458072ba9a455e751b2e96e9cd9cad13e3bc3b513"
	I1202 21:11:35.467639  454352 cri.go:89] found id: "3a609b1131be34914695fe5711760628079b8db9acb1866aea18eb047c5a9cc6"
	I1202 21:11:35.467644  454352 cri.go:89] found id: "17a3bd5107c3db921e3e3234d579c956b23786a6af5c7238fb419560c94de9c6"
	I1202 21:11:35.467647  454352 cri.go:89] found id: "870c81e888423d02aeefcd1163abe252a932c08c2cac0c3d3f13691d44c2e754"
	I1202 21:11:35.467654  454352 cri.go:89] found id: "8ccf79252e522a3b7c38458a77f3740e6035463feadf38a2cde7e9161580086d"
	I1202 21:11:35.467657  454352 cri.go:89] found id: ""
	I1202 21:11:35.467714  454352 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 21:11:35.482832  454352 out.go:203] 
	W1202 21:11:35.485767  454352 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 21:11:35.485792  454352 out.go:285] * 
	* 
	W1202 21:11:35.491533  454352 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:11:35.494459  454352 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-656754 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-218190 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-218190 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6p88f" [af52929e-09cf-4e4f-817a-ffdd19de5bc8] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-6p88f" [af52929e-09cf-4e4f-817a-ffdd19de5bc8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-218190 -n functional-218190
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 21:28:52.972321654 +0000 UTC m=+1244.643384205
functional_test.go:1645: (dbg) Run:  kubectl --context functional-218190 describe po hello-node-connect-7d85dfc575-6p88f -n default
functional_test.go:1645: (dbg) kubectl --context functional-218190 describe po hello-node-connect-7d85dfc575-6p88f -n default:
Name:             hello-node-connect-7d85dfc575-6p88f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-218190/192.168.49.2
Start Time:       Tue, 02 Dec 2025 21:18:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gzlz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6gzlz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6p88f to functional-218190
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff
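The kubelet events above identify the root cause as CRI-O short-name resolution: with short-name-mode = "enforcing" in registries.conf, the unqualified reference kicbase/echo-server:latest matches more than one unqualified-search registry, the pull is rejected as ambiguous, and the container never starts. A minimal sketch of a workaround, assuming docker.io is the intended registry (the test manifest does not say):

	# hypothetical fix: fully qualify the image so no short-name lookup is needed
	kubectl --context functional-218190 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest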
functional_test.go:1645: (dbg) Run:  kubectl --context functional-218190 logs hello-node-connect-7d85dfc575-6p88f -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-218190 logs hello-node-connect-7d85dfc575-6p88f -n default: exit status 1 (103.05603ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6p88f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-218190 logs hello-node-connect-7d85dfc575-6p88f -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-218190 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6p88f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-218190/192.168.49.2
Start Time:       Tue, 02 Dec 2025 21:18:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gzlz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6gzlz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6p88f to functional-218190
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-218190 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-218190 logs -l app=hello-node-connect: exit status 1 (88.370918ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6p88f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-218190 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-218190 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.0.40
IPs:                      10.108.0.40
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31839/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
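The empty Endpoints field above is consistent with the pull failure: the selector matches the pod, but the pod never becomes Ready, so no address is published for the NodePort to forward to. A quick check, under the same assumptions as the sketch above:

	# expect no addresses until the image reference is fixed
	kubectl --context functional-218190 get endpoints hello-node-connect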
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-218190
helpers_test.go:243: (dbg) docker inspect functional-218190:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96",
	        "Created": "2025-12-02T21:16:00.995540845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:16:01.063344506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96/hostname",
	        "HostsPath": "/var/lib/docker/containers/9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96/hosts",
	        "LogPath": "/var/lib/docker/containers/9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96/9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96-json.log",
	        "Name": "/functional-218190",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-218190:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-218190",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9318b945c20ec92b61a3b1f1bfe33a56f8c5928152af05f4795a79b74e20eb96",
	                "LowerDir": "/var/lib/docker/overlay2/484e01a403cec830eebfd8159b42ba9cd586195b4c7b6879088bc7706458eb08-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/484e01a403cec830eebfd8159b42ba9cd586195b4c7b6879088bc7706458eb08/merged",
	                "UpperDir": "/var/lib/docker/overlay2/484e01a403cec830eebfd8159b42ba9cd586195b4c7b6879088bc7706458eb08/diff",
	                "WorkDir": "/var/lib/docker/overlay2/484e01a403cec830eebfd8159b42ba9cd586195b4c7b6879088bc7706458eb08/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-218190",
	                "Source": "/var/lib/docker/volumes/functional-218190/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-218190",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-218190",
	                "name.minikube.sigs.k8s.io": "functional-218190",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ff066ad6d0ff2489eff09742a2c702cb6a1d85c12d582b24113903a1346a96e",
	            "SandboxKey": "/var/run/docker/netns/8ff066ad6d0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-218190": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:c1:7b:d0:ec:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4dd031e6039fd1811101e17a9651872a3773f68dda7f08fb8db9db7e6d138e14",
	                    "EndpointID": "bd94131362755e197029a5d6fe7696004194b503b40dd6029637ee6374493644",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-218190",
	                        "9318b945c20e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-218190 -n functional-218190
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 logs -n 25: (1.419234145s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-218190 ssh sudo cat /etc/ssl/certs/447211.pem                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /usr/share/ca-certificates/447211.pem                                                                                      │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image ls                                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /etc/ssl/certs/4472112.pem                                                                                                 │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /usr/share/ca-certificates/4472112.pem                                                                                     │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image ls                                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh sudo cat /etc/test/nested/copy/447211/hosts                                                                                         │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image ls                                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image save kicbase/echo-server:functional-218190 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh echo hello                                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image rm kicbase/echo-server:functional-218190 --alsologtostderr                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ ssh     │ functional-218190 ssh cat /etc/hostname                                                                                                                   │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image ls                                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ tunnel  │ functional-218190 tunnel --alsologtostderr                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │                     │
	│ tunnel  │ functional-218190 tunnel --alsologtostderr                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │                     │
	│ image   │ functional-218190 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ image   │ functional-218190 image save --daemon kicbase/echo-server:functional-218190 --alsologtostderr                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ tunnel  │ functional-218190 tunnel --alsologtostderr                                                                                                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │                     │
	│ addons  │ functional-218190 addons list                                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	│ addons  │ functional-218190 addons list -o json                                                                                                                     │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:18 UTC │ 02 Dec 25 21:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:17:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:17:53.452208  467256 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:17:53.452328  467256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:17:53.452332  467256 out.go:374] Setting ErrFile to fd 2...
	I1202 21:17:53.452337  467256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:17:53.452615  467256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:17:53.453023  467256 out.go:368] Setting JSON to false
	I1202 21:17:53.453929  467256 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10802,"bootTime":1764699472,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:17:53.453991  467256 start.go:143] virtualization:  
	I1202 21:17:53.457507  467256 out.go:179] * [functional-218190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:17:53.461581  467256 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:17:53.461657  467256 notify.go:221] Checking for updates...
	I1202 21:17:53.467415  467256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:17:53.470273  467256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:17:53.473186  467256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:17:53.476153  467256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:17:53.479063  467256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:17:53.482445  467256 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:17:53.482537  467256 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:17:53.511696  467256 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:17:53.511792  467256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:17:53.577816  467256 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-02 21:17:53.568677762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:17:53.577905  467256 docker.go:319] overlay module found
	I1202 21:17:53.581042  467256 out.go:179] * Using the docker driver based on existing profile
	I1202 21:17:53.583987  467256 start.go:309] selected driver: docker
	I1202 21:17:53.583998  467256 start.go:927] validating driver "docker" against &{Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:17:53.584097  467256 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:17:53.584204  467256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:17:53.640802  467256 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-02 21:17:53.631537653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:17:53.641258  467256 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:17:53.641286  467256 cni.go:84] Creating CNI manager for ""
	I1202 21:17:53.641342  467256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
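The two cni.go lines above record minikube's CNI auto-selection: the docker driver paired with the crio runtime defaults to kindnet, since pods inside the kic container need a CNI that works there. A minimal sketch of that decision rule (illustrative Go only; the function name and signature are invented for this note, the real logic lives in cni.go):

	package cnisketch

	// chooseCNI mirrors the rule the log reports: a non-docker runtime on
	// the docker driver gets kindnet unless the user requested a CNI.
	func chooseCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested // an explicit --cni flag wins
		}
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "" // remaining combinations keep minikube's default CNI
	}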
	I1202 21:17:53.641380  467256 start.go:353] cluster config:
	{Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:17:53.644535  467256 out.go:179] * Starting "functional-218190" primary control-plane node in "functional-218190" cluster
	I1202 21:17:53.647442  467256 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:17:53.650385  467256 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:17:53.653340  467256 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:17:53.653384  467256 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 21:17:53.653424  467256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:17:53.653462  467256 cache.go:65] Caching tarball of preloaded images
	I1202 21:17:53.653596  467256 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 21:17:53.653602  467256 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 21:17:53.653734  467256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/config.json ...
	I1202 21:17:53.673193  467256 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:17:53.673204  467256 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 21:17:53.673217  467256 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:17:53.673247  467256 start.go:360] acquireMachinesLock for functional-218190: {Name:mk191de055cac7b7facee357791eebc25661e73e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:17:53.673299  467256 start.go:364] duration metric: took 36.677µs to acquireMachinesLock for "functional-218190"
	I1202 21:17:53.673318  467256 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:17:53.673321  467256 fix.go:54] fixHost starting: 
	I1202 21:17:53.673585  467256 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
	I1202 21:17:53.691109  467256 fix.go:112] recreateIfNeeded on functional-218190: state=Running err=<nil>
	W1202 21:17:53.691129  467256 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:17:53.694464  467256 out.go:252] * Updating the running docker "functional-218190" container ...
	I1202 21:17:53.694492  467256 machine.go:94] provisionDockerMachine start ...
	I1202 21:17:53.694569  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:17:53.712761  467256 main.go:143] libmachine: Using SSH client type: native
	I1202 21:17:53.713091  467256 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1202 21:17:53.713097  467256 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:17:53.862533  467256 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-218190
	
	I1202 21:17:53.862547  467256 ubuntu.go:182] provisioning hostname "functional-218190"
	I1202 21:17:53.862610  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:17:53.881035  467256 main.go:143] libmachine: Using SSH client type: native
	I1202 21:17:53.881336  467256 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1202 21:17:53.881344  467256 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-218190 && echo "functional-218190" | sudo tee /etc/hostname
	I1202 21:17:54.049062  467256 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-218190
	
	I1202 21:17:54.049142  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:17:54.068258  467256 main.go:143] libmachine: Using SSH client type: native
	I1202 21:17:54.068607  467256 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1202 21:17:54.068621  467256 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-218190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-218190/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-218190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:17:54.219267  467256 main.go:143] libmachine: SSH cmd err, output: <nil>: 
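The SSH heredoc above performs an idempotent /etc/hosts update: if no line already names the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. The same semantics rendered in Go (a simplified sketch for illustration only; minikube runs the shell version over SSH):

	package hostsedit

	import (
		"os"
		"strings"
	)

	// setHostsEntry maps 127.0.1.1 to hostname, preferring to rewrite an
	// existing 127.0.1.1 line over appending a new one.
	func setHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), hostname) {
			return nil // an entry for this hostname already exists
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		replaced := false
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite in place
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname) // or append
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
	}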
	I1202 21:17:54.219283  467256 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:17:54.219314  467256 ubuntu.go:190] setting up certificates
	I1202 21:17:54.219321  467256 provision.go:84] configureAuth start
	I1202 21:17:54.219380  467256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-218190
	I1202 21:17:54.236390  467256 provision.go:143] copyHostCerts
	I1202 21:17:54.236466  467256 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:17:54.236478  467256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:17:54.236558  467256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:17:54.236662  467256 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:17:54.236666  467256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:17:54.236694  467256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:17:54.236756  467256 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:17:54.236759  467256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:17:54.236781  467256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:17:54.236835  467256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-218190 san=[127.0.0.1 192.168.49.2 functional-218190 localhost minikube]
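The provision step above issues a server certificate signed by the minikube CA, with SANs covering every name the machine may be reached by: 127.0.0.1, 192.168.49.2, functional-218190, localhost and minikube. A condensed sketch of that issuance with crypto/x509 (illustrative, not minikube's code; the 26280h lifetime matches CertExpiration in the cluster config above):

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server cert with the given CA, embedding the
	// IP and DNS SANs the log line reports.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-218190"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"functional-218190", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}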
	I1202 21:17:54.513254  467256 provision.go:177] copyRemoteCerts
	I1202 21:17:54.513308  467256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:17:54.513346  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:17:54.530294  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:17:54.641604  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:17:54.661836  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:17:54.680280  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:17:54.698569  467256 provision.go:87] duration metric: took 479.234382ms to configureAuth
	I1202 21:17:54.698588  467256 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:17:54.698805  467256 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:17:54.698899  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:17:54.716759  467256 main.go:143] libmachine: Using SSH client type: native
	I1202 21:17:54.717158  467256 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1202 21:17:54.717172  467256 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:18:00.415582  467256 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:18:00.415598  467256 machine.go:97] duration metric: took 6.721098083s to provisionDockerMachine
	I1202 21:18:00.415609  467256 start.go:293] postStartSetup for "functional-218190" (driver="docker")
	I1202 21:18:00.415621  467256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:18:00.415694  467256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:18:00.415756  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:00.436493  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:00.543639  467256 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:18:00.547556  467256 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:18:00.547575  467256 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:18:00.547585  467256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:18:00.547645  467256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:18:00.547724  467256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:18:00.547803  467256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:18:00.547858  467256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:18:00.556943  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:18:00.575030  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:18:00.593715  467256 start.go:296] duration metric: took 178.074157ms for postStartSetup
	I1202 21:18:00.593791  467256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:18:00.593834  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:00.611206  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:00.712504  467256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:18:00.717592  467256 fix.go:56] duration metric: took 7.044254155s for fixHost
	I1202 21:18:00.717609  467256 start.go:83] releasing machines lock for "functional-218190", held for 7.044302238s
	I1202 21:18:00.717679  467256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-218190
	I1202 21:18:00.734888  467256 ssh_runner.go:195] Run: cat /version.json
	I1202 21:18:00.734930  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:00.735283  467256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:18:00.735333  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:00.756176  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:00.758833  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:00.859349  467256 ssh_runner.go:195] Run: systemctl --version
	I1202 21:18:00.954938  467256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:18:00.992007  467256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:18:00.996613  467256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:18:00.996676  467256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:18:01.006148  467256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:18:01.006165  467256 start.go:496] detecting cgroup driver to use...
	I1202 21:18:01.006222  467256 detect.go:187] detected "cgroupfs" cgroup driver on host os
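The "detected cgroupfs" line above is the host-side probe that later drives the cgroup_manager rewrite in /etc/crio/crio.conf.d/02-crio.conf. A rough stand-in for that probe (minikube's real heuristic in detect.go is more involved; the path check below is an assumption for illustration):

	package cgroupsketch

	import "os"

	// detectCgroupDriver treats a host as systemd-managed only when the
	// unified cgroup v2 hierarchy is mounted; otherwise it assumes
	// cgroupfs, which is what this Ubuntu 20.04 host reports.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}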
	I1202 21:18:01.006273  467256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:18:01.023138  467256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:18:01.037360  467256 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:18:01.037431  467256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:18:01.053840  467256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:18:01.067843  467256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:18:01.225383  467256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:18:01.358971  467256 docker.go:234] disabling docker service ...
	I1202 21:18:01.359062  467256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:18:01.376071  467256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:18:01.389910  467256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:18:01.533704  467256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:18:01.668005  467256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:18:01.681344  467256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:18:01.695776  467256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:18:01.695845  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.705066  467256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:18:01.705135  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.714841  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.724260  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.733306  467256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:18:01.741806  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.751124  467256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.761729  467256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:18:01.771047  467256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:18:01.778645  467256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:18:01.786013  467256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:18:01.922126  467256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:18:07.314036  467256 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.391886941s)
	I1202 21:18:07.314057  467256 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:18:07.314108  467256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:18:07.317924  467256 start.go:564] Will wait 60s for crictl version
	I1202 21:18:07.317978  467256 ssh_runner.go:195] Run: which crictl
	I1202 21:18:07.321387  467256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:18:07.350781  467256 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
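The two 60-second waits above (start.go:543 for the CRI socket, start.go:564 for crictl) are simple polls: CRI-O just restarted, so the socket file and a responsive crictl are what prove the runtime is back. A sketch of the first wait (illustrative Go, not minikube's code):

	package criwait

	import (
		"errors"
		"os"
		"time"
	)

	// waitForPath polls until the given path exists or the timeout lapses,
	// e.g. /var/run/crio/crio.sock after a crio restart.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for " + path)
	}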
	I1202 21:18:07.350870  467256 ssh_runner.go:195] Run: crio --version
	I1202 21:18:07.379583  467256 ssh_runner.go:195] Run: crio --version
	I1202 21:18:07.413041  467256 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 21:18:07.416017  467256 cli_runner.go:164] Run: docker network inspect functional-218190 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:18:07.434879  467256 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:18:07.442045  467256 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:18:07.444913  467256 kubeadm.go:884] updating cluster {Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:18:07.445049  467256 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:18:07.445127  467256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:18:07.478647  467256 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:18:07.478658  467256 crio.go:433] Images already preloaded, skipping extraction
	I1202 21:18:07.478715  467256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:18:07.504226  467256 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:18:07.504237  467256 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:18:07.504244  467256 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1202 21:18:07.504343  467256 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-218190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:18:07.504423  467256 ssh_runner.go:195] Run: crio config
	I1202 21:18:07.556648  467256 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:18:07.556667  467256 cni.go:84] Creating CNI manager for ""
	I1202 21:18:07.556676  467256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:18:07.556689  467256 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:18:07.556709  467256 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-218190 NodeName:functional-218190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:18:07.556832  467256 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-218190"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:18:07.556905  467256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 21:18:07.564301  467256 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:18:07.564368  467256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:18:07.571642  467256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1202 21:18:07.584332  467256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 21:18:07.596530  467256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1202 21:18:07.608886  467256 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:18:07.612647  467256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:18:07.741445  467256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:18:07.755969  467256 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190 for IP: 192.168.49.2
	I1202 21:18:07.755979  467256 certs.go:195] generating shared ca certs ...
	I1202 21:18:07.755995  467256 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:18:07.756140  467256 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:18:07.756181  467256 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:18:07.756187  467256 certs.go:257] generating profile certs ...
	I1202 21:18:07.756269  467256 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.key
	I1202 21:18:07.756328  467256 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/apiserver.key.bdf17f69
	I1202 21:18:07.756364  467256 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/proxy-client.key
	I1202 21:18:07.756514  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:18:07.756546  467256 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:18:07.756553  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:18:07.756580  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:18:07.756603  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:18:07.756628  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:18:07.756671  467256 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:18:07.757291  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:18:07.774693  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:18:07.791833  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:18:07.809672  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:18:07.826647  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:18:07.843984  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:18:07.861113  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:18:07.878480  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:18:07.896783  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:18:07.914016  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:18:07.931381  467256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:18:07.948564  467256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:18:07.961596  467256 ssh_runner.go:195] Run: openssl version
	I1202 21:18:07.967812  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:18:07.976287  467256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:18:07.979950  467256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:15 /usr/share/ca-certificates/447211.pem
	I1202 21:18:07.980002  467256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:18:08.021252  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:18:08.029621  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:18:08.038410  467256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:18:08.042415  467256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:15 /usr/share/ca-certificates/4472112.pem
	I1202 21:18:08.042480  467256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:18:08.084543  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:18:08.096128  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:18:08.104830  467256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:18:08.108775  467256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:18:08.108832  467256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:18:08.154802  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
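Each certificate install above follows the same three-step pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's lookup-by-hash can resolve the CA (b5213941.0 is minikubeCA's hash here). Sketched locally in Go (minikube actually runs these commands over SSH in the guest):

	package hashlink

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates the <hash>.0 symlink OpenSSL expects for a
	// trusted CA, mirroring the `openssl x509 -hash` plus `ln -fs` steps.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(pem, link)
	}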
	I1202 21:18:08.162703  467256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:18:08.166627  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:18:08.207407  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:18:08.248039  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:18:08.288848  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:18:08.329763  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:18:08.371450  467256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
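The six openssl invocations above all pass -checkend 86400, which exits zero only if the certificate remains valid for at least another 24 hours; an expiring control-plane cert would be regenerated before the restart. An equivalent check in Go (a sketch, not minikube's implementation):

	package checkend

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// checkEnd fails if the certificate at path expires within the window,
	// matching `openssl x509 -noout -checkend <seconds>` semantics.
	func checkEnd(path string, window time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return errors.New("no PEM block found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(window).After(cert.NotAfter) {
			return errors.New("certificate expires within the window")
		}
		return nil
	}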
	I1202 21:18:08.412255  467256 kubeadm.go:401] StartCluster: {Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:18:08.412345  467256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:18:08.412415  467256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:18:08.443377  467256 cri.go:89] found id: "82b4913f52c103abce25613a5b70261f4d284511e73c33bec6d364372fb9a5c1"
	I1202 21:18:08.443387  467256 cri.go:89] found id: "bac927666c5c2fa5fd82e7d087ac16576e9dd4425138783bc61b3a116de8a2a2"
	I1202 21:18:08.443391  467256 cri.go:89] found id: "54cfa3e00bc564db16308dcdbbdaaf7097d8320909246cb8ab229bae29dfd867"
	I1202 21:18:08.443393  467256 cri.go:89] found id: "76f8f155db498c4f9e6544c93ee77b005d408f20c7db1988ab64c180be55c615"
	I1202 21:18:08.443395  467256 cri.go:89] found id: "5994ad33759332e65287e3904b93f8f8117b47970c8165872665be0d53bc0738"
	I1202 21:18:08.443398  467256 cri.go:89] found id: "fa040bfd1e663f4ab6d85d04bc21e133954ab3c7115b149795ea4a9654a82234"
	I1202 21:18:08.443400  467256 cri.go:89] found id: "2ca345d71938d2be7806dd9f5d396868556008b6c6693317e2b1ffbd2e4895e4"
	I1202 21:18:08.443402  467256 cri.go:89] found id: "fc754cba7728bc704678fcd29d5a6b6e2b356519a6fe469766c382cfcaa90383"
	I1202 21:18:08.443405  467256 cri.go:89] found id: "fb6b4c4bf0a652fad4fa9c092fc9659f3c83b77665fcfee8dc68c77a5f672f12"
	I1202 21:18:08.443411  467256 cri.go:89] found id: "c90b755ca13839cd01b04f035a550783a2a88aea3ba558f7630418d83cb56e66"
	I1202 21:18:08.443413  467256 cri.go:89] found id: "c75d3e38451ab761b029d6d95f659e14d333968b830a71c42fd207c609e77312"
	I1202 21:18:08.443418  467256 cri.go:89] found id: "92b973d179599382d255aa23cf6d12493a171d90b7fb67131fad18b4e5f3da68"
	I1202 21:18:08.443420  467256 cri.go:89] found id: "fc0feaa9f77a4cf382debbeccd7372b702d9f0bd7625519103edb4e0c4b5fad8"
	I1202 21:18:08.443422  467256 cri.go:89] found id: "c1ef430eafdfbe43de860c3612a32392efadaee58efa350266e57799cac33d5d"
	I1202 21:18:08.443424  467256 cri.go:89] found id: "fcdd1394c5d71626bd045a3da7f5eaca0d30213a041a09e4a4f8a69b4cbfba21"
	I1202 21:18:08.443431  467256 cri.go:89] found id: "93e02ad1561f9fc45c4ddc8a04603834e17c70af712c46004c77e4154ecaea05"
	I1202 21:18:08.443433  467256 cri.go:89] found id: ""
	I1202 21:18:08.443484  467256 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 21:18:08.454736  467256 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:18:08Z" level=error msg="open /run/runc: no such file or directory"
	I1202 21:18:08.454808  467256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:18:08.462628  467256 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:18:08.462637  467256 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:18:08.462687  467256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:18:08.469888  467256 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:18:08.470418  467256 kubeconfig.go:125] found "functional-218190" server: "https://192.168.49.2:8441"
	I1202 21:18:08.471808  467256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:18:08.479600  467256 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:16:10.059429971 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:18:07.604684264 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
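Drift detection above is just diff -u against the previously deployed kubeadm.yaml: exit status 1 means the rendered config changed (here, the enable-admission-plugins override), so the control plane must be reconfigured rather than reused. The same check in miniature (illustrative Go; the exit-code handling is the essential part):

	package drift

	import (
		"errors"
		"os/exec"
	)

	// configDrifted reports whether two rendered configs differ, using
	// diff's exit codes: 0 identical, 1 different, 2 an error.
	func configDrifted(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil // identical
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil // files differ
		}
		return false, err // diff itself failed (missing file, ...)
	}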
	I1202 21:18:08.479611  467256 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:18:08.479622  467256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 21:18:08.479682  467256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:18:08.507886  467256 cri.go:89] found id: "82b4913f52c103abce25613a5b70261f4d284511e73c33bec6d364372fb9a5c1"
	I1202 21:18:08.507897  467256 cri.go:89] found id: "bac927666c5c2fa5fd82e7d087ac16576e9dd4425138783bc61b3a116de8a2a2"
	I1202 21:18:08.507901  467256 cri.go:89] found id: "54cfa3e00bc564db16308dcdbbdaaf7097d8320909246cb8ab229bae29dfd867"
	I1202 21:18:08.507903  467256 cri.go:89] found id: "76f8f155db498c4f9e6544c93ee77b005d408f20c7db1988ab64c180be55c615"
	I1202 21:18:08.507906  467256 cri.go:89] found id: "5994ad33759332e65287e3904b93f8f8117b47970c8165872665be0d53bc0738"
	I1202 21:18:08.507908  467256 cri.go:89] found id: "fa040bfd1e663f4ab6d85d04bc21e133954ab3c7115b149795ea4a9654a82234"
	I1202 21:18:08.507910  467256 cri.go:89] found id: "2ca345d71938d2be7806dd9f5d396868556008b6c6693317e2b1ffbd2e4895e4"
	I1202 21:18:08.507913  467256 cri.go:89] found id: "fc754cba7728bc704678fcd29d5a6b6e2b356519a6fe469766c382cfcaa90383"
	I1202 21:18:08.507915  467256 cri.go:89] found id: "fb6b4c4bf0a652fad4fa9c092fc9659f3c83b77665fcfee8dc68c77a5f672f12"
	I1202 21:18:08.507920  467256 cri.go:89] found id: "c90b755ca13839cd01b04f035a550783a2a88aea3ba558f7630418d83cb56e66"
	I1202 21:18:08.507923  467256 cri.go:89] found id: "c75d3e38451ab761b029d6d95f659e14d333968b830a71c42fd207c609e77312"
	I1202 21:18:08.507925  467256 cri.go:89] found id: "92b973d179599382d255aa23cf6d12493a171d90b7fb67131fad18b4e5f3da68"
	I1202 21:18:08.507927  467256 cri.go:89] found id: "fc0feaa9f77a4cf382debbeccd7372b702d9f0bd7625519103edb4e0c4b5fad8"
	I1202 21:18:08.507929  467256 cri.go:89] found id: "c1ef430eafdfbe43de860c3612a32392efadaee58efa350266e57799cac33d5d"
	I1202 21:18:08.507931  467256 cri.go:89] found id: "fcdd1394c5d71626bd045a3da7f5eaca0d30213a041a09e4a4f8a69b4cbfba21"
	I1202 21:18:08.507935  467256 cri.go:89] found id: "93e02ad1561f9fc45c4ddc8a04603834e17c70af712c46004c77e4154ecaea05"
	I1202 21:18:08.507937  467256 cri.go:89] found id: ""
	I1202 21:18:08.507942  467256 cri.go:252] Stopping containers: [82b4913f52c103abce25613a5b70261f4d284511e73c33bec6d364372fb9a5c1 bac927666c5c2fa5fd82e7d087ac16576e9dd4425138783bc61b3a116de8a2a2 54cfa3e00bc564db16308dcdbbdaaf7097d8320909246cb8ab229bae29dfd867 76f8f155db498c4f9e6544c93ee77b005d408f20c7db1988ab64c180be55c615 5994ad33759332e65287e3904b93f8f8117b47970c8165872665be0d53bc0738 fa040bfd1e663f4ab6d85d04bc21e133954ab3c7115b149795ea4a9654a82234 2ca345d71938d2be7806dd9f5d396868556008b6c6693317e2b1ffbd2e4895e4 fc754cba7728bc704678fcd29d5a6b6e2b356519a6fe469766c382cfcaa90383 fb6b4c4bf0a652fad4fa9c092fc9659f3c83b77665fcfee8dc68c77a5f672f12 c90b755ca13839cd01b04f035a550783a2a88aea3ba558f7630418d83cb56e66 c75d3e38451ab761b029d6d95f659e14d333968b830a71c42fd207c609e77312 92b973d179599382d255aa23cf6d12493a171d90b7fb67131fad18b4e5f3da68 fc0feaa9f77a4cf382debbeccd7372b702d9f0bd7625519103edb4e0c4b5fad8 c1ef430eafdfbe43de860c3612a32392efadaee58efa350266e57799cac33d5d fcdd1394c5d71626bd045a3da7f5eaca0d30213a041a09e4a4f8a69b4cbfba21 93e02ad1561f9fc45c4ddc8a04603834e17c70af712c46004c77e4154ecaea05]
	I1202 21:18:08.508007  467256 ssh_runner.go:195] Run: which crictl
	I1202 21:18:08.511827  467256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 82b4913f52c103abce25613a5b70261f4d284511e73c33bec6d364372fb9a5c1 bac927666c5c2fa5fd82e7d087ac16576e9dd4425138783bc61b3a116de8a2a2 54cfa3e00bc564db16308dcdbbdaaf7097d8320909246cb8ab229bae29dfd867 76f8f155db498c4f9e6544c93ee77b005d408f20c7db1988ab64c180be55c615 5994ad33759332e65287e3904b93f8f8117b47970c8165872665be0d53bc0738 fa040bfd1e663f4ab6d85d04bc21e133954ab3c7115b149795ea4a9654a82234 2ca345d71938d2be7806dd9f5d396868556008b6c6693317e2b1ffbd2e4895e4 fc754cba7728bc704678fcd29d5a6b6e2b356519a6fe469766c382cfcaa90383 fb6b4c4bf0a652fad4fa9c092fc9659f3c83b77665fcfee8dc68c77a5f672f12 c90b755ca13839cd01b04f035a550783a2a88aea3ba558f7630418d83cb56e66 c75d3e38451ab761b029d6d95f659e14d333968b830a71c42fd207c609e77312 92b973d179599382d255aa23cf6d12493a171d90b7fb67131fad18b4e5f3da68 fc0feaa9f77a4cf382debbeccd7372b702d9f0bd7625519103edb4e0c4b5fad8 c1ef430eafdfbe43de860c3612a32392efadaee58efa350266e57799cac33d5d fcdd1394c5d71626bd045a3da7f5eaca0d30213a041a09e4a4f8a69b4cbfba21 93e02ad1561f9fc45c4ddc8a04603834e17c70af712c46004c77e4154ecaea05
	I1202 21:18:08.616421  467256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:18:08.746334  467256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:18:08.755754  467256 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  2 21:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 21:16 /etc/kubernetes/scheduler.conf
	
	I1202 21:18:08.755809  467256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:18:08.763693  467256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:18:08.771327  467256 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:18:08.771391  467256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:18:08.778462  467256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:18:08.785839  467256 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:18:08.785891  467256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:18:08.793278  467256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:18:08.800797  467256 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:18:08.800852  467256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
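Each grep above checks whether an existing kubeconfig still points at https://control-plane.minikube.internal:8441; exit status 1 (no match) is treated as a stale endpoint and the file is removed so the kubeconfig phase below can regenerate it. A sketch of that check-and-remove loop under the same endpoint and file list (a hypothetical helper, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8441"

func main() {
	files := []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the endpoint is absent from the file; treat that
		// as stale and delete, so `kubeadm init phase kubeconfig` rewrites it.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s lacks %s, removing\n", f, endpoint)
			os.Remove(f)
		}
	}
}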
	I1202 21:18:08.807983  467256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:18:08.815512  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:18:08.862401  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:18:10.220276  467256 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.357851808s)
	I1202 21:18:10.220352  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:18:10.439648  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:18:10.502423  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
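Instead of a full kubeadm init, this restart path replays individual init phases against the same /var/tmp/minikube/kubeadm.yaml, regenerating only the certificates, kubeconfigs, kubelet config, static pod manifests, and local etcd manifest. A sketch driving the same phase sequence (binary path from the log; error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.34.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs first, then the kubeconfigs that
	// reference them, then kubelet, then the static pod manifests.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm}, append(p, "--config", cfg)...)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}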
	I1202 21:18:10.569268  467256 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:18:10.569338  467256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.070095  467256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.570371  467256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.587410  467256 api_server.go:72] duration metric: took 1.018140396s to wait for apiserver process to appear ...
	I1202 21:18:11.587426  467256 api_server.go:88] waiting for apiserver healthz status ...
	I1202 21:18:11.587443  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:15.144158  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 21:18:15.144174  467256 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 21:18:15.144185  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:15.220264  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 21:18:15.220289  467256 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 21:18:15.587764  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:15.599793  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 21:18:15.599813  467256 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 21:18:16.088167  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:16.096271  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 21:18:16.096287  467256 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 21:18:16.587865  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:16.596073  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
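The 403s above come from the anonymous health probe hitting the apiserver before its RBAC bootstrap roles exist, and the 500s show the two still-failing post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes); the wait only ends on a 200 whose body is "ok". A minimal polling sketch against the same endpoint, skipping certificate verification since the probe does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving cert is signed by the cluster CA, which this sketch
		// does not load; skip verification for the health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (post-start hooks) both mean "not yet".
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}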
	I1202 21:18:16.609629  467256 api_server.go:141] control plane version: v1.34.2
	I1202 21:18:16.609647  467256 api_server.go:131] duration metric: took 5.022215361s to wait for apiserver health ...
	I1202 21:18:16.609654  467256 cni.go:84] Creating CNI manager for ""
	I1202 21:18:16.609660  467256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:18:16.613091  467256 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 21:18:16.616136  467256 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 21:18:16.620773  467256 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 21:18:16.620782  467256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 21:18:16.634551  467256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
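The CNI step writes the 2601-byte kindnet manifest from memory to /var/tmp/minikube/cni.yaml and applies it with the cluster-local kubectl and kubeconfig. A sketch of that write-then-apply step, with the manifest body elided:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# kindnet manifest elided (2601 bytes in the log)\n")
	path := "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		fmt.Println(err)
		return
	}
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println(err)
	}
}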
	I1202 21:18:17.088025  467256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 21:18:17.092089  467256 system_pods.go:59] 8 kube-system pods found
	I1202 21:18:17.092110  467256 system_pods.go:61] "coredns-66bc5c9577-nxfxl" [cd0a5336-15f8-450d-81d4-9b41d9bf103b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:18:17.092116  467256 system_pods.go:61] "etcd-functional-218190" [8eb56ad5-b4dc-48f3-b844-dc3d9b99f0d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 21:18:17.092121  467256 system_pods.go:61] "kindnet-4nthc" [0843cca5-eb04-4d47-9939-733166116a74] Running
	I1202 21:18:17.092126  467256 system_pods.go:61] "kube-apiserver-functional-218190" [d9113547-6cbf-4886-9746-64acecdba13a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 21:18:17.092132  467256 system_pods.go:61] "kube-controller-manager-functional-218190" [483f501e-3c9c-4640-87b5-690572edc78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 21:18:17.092136  467256 system_pods.go:61] "kube-proxy-sdl9j" [4de3bcb8-007b-4f88-b774-ebe2f88cacd9] Running
	I1202 21:18:17.092142  467256 system_pods.go:61] "kube-scheduler-functional-218190" [6171c7a0-2ba3-4f1f-9416-c3d811625769] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 21:18:17.092145  467256 system_pods.go:61] "storage-provisioner" [1b27243c-04f5-4bd3-8e4c-5e043501d2e3] Running
	I1202 21:18:17.092150  467256 system_pods.go:74] duration metric: took 4.115301ms to wait for pod list to return data ...
	I1202 21:18:17.092156  467256 node_conditions.go:102] verifying NodePressure condition ...
	I1202 21:18:17.096704  467256 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 21:18:17.096723  467256 node_conditions.go:123] node cpu capacity is 2
	I1202 21:18:17.096734  467256 node_conditions.go:105] duration metric: took 4.57403ms to run NodePressure ...
	I1202 21:18:17.096794  467256 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:18:17.356854  467256 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1202 21:18:17.361079  467256 kubeadm.go:744] kubelet initialised
	I1202 21:18:17.361089  467256 kubeadm.go:745] duration metric: took 4.222527ms waiting for restarted kubelet to initialise ...
	I1202 21:18:17.361105  467256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 21:18:17.370955  467256 ops.go:34] apiserver oom_adj: -16
	I1202 21:18:17.370967  467256 kubeadm.go:602] duration metric: took 8.908324157s to restartPrimaryControlPlane
	I1202 21:18:17.370975  467256 kubeadm.go:403] duration metric: took 8.9587305s to StartCluster
	I1202 21:18:17.370989  467256 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:18:17.371064  467256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:18:17.371651  467256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:18:17.371857  467256 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:18:17.372189  467256 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:18:17.372239  467256 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:18:17.372355  467256 addons.go:70] Setting storage-provisioner=true in profile "functional-218190"
	I1202 21:18:17.372369  467256 addons.go:239] Setting addon storage-provisioner=true in "functional-218190"
	W1202 21:18:17.372374  467256 addons.go:248] addon storage-provisioner should already be in state true
	I1202 21:18:17.372397  467256 host.go:66] Checking if "functional-218190" exists ...
	I1202 21:18:17.372861  467256 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
	I1202 21:18:17.372995  467256 addons.go:70] Setting default-storageclass=true in profile "functional-218190"
	I1202 21:18:17.373006  467256 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-218190"
	I1202 21:18:17.373327  467256 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
	I1202 21:18:17.375801  467256 out.go:179] * Verifying Kubernetes components...
	I1202 21:18:17.378891  467256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:18:17.411668  467256 addons.go:239] Setting addon default-storageclass=true in "functional-218190"
	W1202 21:18:17.411680  467256 addons.go:248] addon default-storageclass should already be in state true
	I1202 21:18:17.411701  467256 host.go:66] Checking if "functional-218190" exists ...
	I1202 21:18:17.412104  467256 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
	I1202 21:18:17.417077  467256 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:18:17.420196  467256 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:18:17.420208  467256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:18:17.420277  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:17.454671  467256 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:18:17.454684  467256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:18:17.454746  467256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:17.475255  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:17.492853  467256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:17.620037  467256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:18:17.639289  467256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:18:17.647692  467256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:18:18.426634  467256 node_ready.go:35] waiting up to 6m0s for node "functional-218190" to be "Ready" ...
	I1202 21:18:18.429197  467256 node_ready.go:49] node "functional-218190" is "Ready"
	I1202 21:18:18.429213  467256 node_ready.go:38] duration metric: took 2.562029ms for node "functional-218190" to be "Ready" ...
	I1202 21:18:18.429224  467256 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:18:18.429282  467256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:18.437451  467256 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 21:18:18.440302  467256 addons.go:530] duration metric: took 1.068066789s for enable addons: enabled=[storage-provisioner default-storageclass]
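The addon YAMLs above reach the node over SSH: the two "docker container inspect -f" calls resolve which host port Docker mapped to the container's 22/tcp (33143 in this run), and sshutil then dials 127.0.0.1 on that port. A sketch of the same port lookup via the Docker CLI, with the quoting simplified:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to 22/tcp for the container.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-218190")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // 33143 in this run
}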
	I1202 21:18:18.441901  467256 api_server.go:72] duration metric: took 1.070021444s to wait for apiserver process to appear ...
	I1202 21:18:18.441922  467256 api_server.go:88] waiting for apiserver healthz status ...
	I1202 21:18:18.441940  467256 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 21:18:18.451904  467256 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1202 21:18:18.452935  467256 api_server.go:141] control plane version: v1.34.2
	I1202 21:18:18.452948  467256 api_server.go:131] duration metric: took 11.020227ms to wait for apiserver health ...
	I1202 21:18:18.452955  467256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 21:18:18.456177  467256 system_pods.go:59] 8 kube-system pods found
	I1202 21:18:18.456196  467256 system_pods.go:61] "coredns-66bc5c9577-nxfxl" [cd0a5336-15f8-450d-81d4-9b41d9bf103b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:18:18.456203  467256 system_pods.go:61] "etcd-functional-218190" [8eb56ad5-b4dc-48f3-b844-dc3d9b99f0d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 21:18:18.456207  467256 system_pods.go:61] "kindnet-4nthc" [0843cca5-eb04-4d47-9939-733166116a74] Running
	I1202 21:18:18.456213  467256 system_pods.go:61] "kube-apiserver-functional-218190" [d9113547-6cbf-4886-9746-64acecdba13a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 21:18:18.456218  467256 system_pods.go:61] "kube-controller-manager-functional-218190" [483f501e-3c9c-4640-87b5-690572edc78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 21:18:18.456221  467256 system_pods.go:61] "kube-proxy-sdl9j" [4de3bcb8-007b-4f88-b774-ebe2f88cacd9] Running
	I1202 21:18:18.456226  467256 system_pods.go:61] "kube-scheduler-functional-218190" [6171c7a0-2ba3-4f1f-9416-c3d811625769] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 21:18:18.456228  467256 system_pods.go:61] "storage-provisioner" [1b27243c-04f5-4bd3-8e4c-5e043501d2e3] Running
	I1202 21:18:18.456233  467256 system_pods.go:74] duration metric: took 3.274004ms to wait for pod list to return data ...
	I1202 21:18:18.456239  467256 default_sa.go:34] waiting for default service account to be created ...
	I1202 21:18:18.458728  467256 default_sa.go:45] found service account: "default"
	I1202 21:18:18.458745  467256 default_sa.go:55] duration metric: took 2.496609ms for default service account to be created ...
	I1202 21:18:18.458753  467256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 21:18:18.461503  467256 system_pods.go:86] 8 kube-system pods found
	I1202 21:18:18.461520  467256 system_pods.go:89] "coredns-66bc5c9577-nxfxl" [cd0a5336-15f8-450d-81d4-9b41d9bf103b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 21:18:18.461527  467256 system_pods.go:89] "etcd-functional-218190" [8eb56ad5-b4dc-48f3-b844-dc3d9b99f0d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 21:18:18.461531  467256 system_pods.go:89] "kindnet-4nthc" [0843cca5-eb04-4d47-9939-733166116a74] Running
	I1202 21:18:18.461537  467256 system_pods.go:89] "kube-apiserver-functional-218190" [d9113547-6cbf-4886-9746-64acecdba13a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 21:18:18.461542  467256 system_pods.go:89] "kube-controller-manager-functional-218190" [483f501e-3c9c-4640-87b5-690572edc78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 21:18:18.461545  467256 system_pods.go:89] "kube-proxy-sdl9j" [4de3bcb8-007b-4f88-b774-ebe2f88cacd9] Running
	I1202 21:18:18.461551  467256 system_pods.go:89] "kube-scheduler-functional-218190" [6171c7a0-2ba3-4f1f-9416-c3d811625769] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 21:18:18.461554  467256 system_pods.go:89] "storage-provisioner" [1b27243c-04f5-4bd3-8e4c-5e043501d2e3] Running
	I1202 21:18:18.461559  467256 system_pods.go:126] duration metric: took 2.801859ms to wait for k8s-apps to be running ...
	I1202 21:18:18.461566  467256 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 21:18:18.461626  467256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:18:18.475047  467256 system_svc.go:56] duration metric: took 13.425126ms WaitForService to wait for kubelet
	I1202 21:18:18.475073  467256 kubeadm.go:587] duration metric: took 1.103195546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:18:18.475090  467256 node_conditions.go:102] verifying NodePressure condition ...
	I1202 21:18:18.477961  467256 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 21:18:18.477984  467256 node_conditions.go:123] node cpu capacity is 2
	I1202 21:18:18.477993  467256 node_conditions.go:105] duration metric: took 2.89932ms to run NodePressure ...
	I1202 21:18:18.478004  467256 start.go:242] waiting for startup goroutines ...
	I1202 21:18:18.478011  467256 start.go:247] waiting for cluster config update ...
	I1202 21:18:18.478020  467256 start.go:256] writing updated cluster config ...
	I1202 21:18:18.478338  467256 ssh_runner.go:195] Run: rm -f paused
	I1202 21:18:18.481976  467256 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 21:18:18.485345  467256 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nxfxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 21:18:20.490692  467256 pod_ready.go:104] pod "coredns-66bc5c9577-nxfxl" is not "Ready", error: <nil>
	W1202 21:18:22.490963  467256 pod_ready.go:104] pod "coredns-66bc5c9577-nxfxl" is not "Ready", error: <nil>
	W1202 21:18:24.491058  467256 pod_ready.go:104] pod "coredns-66bc5c9577-nxfxl" is not "Ready", error: <nil>
	I1202 21:18:25.996591  467256 pod_ready.go:94] pod "coredns-66bc5c9577-nxfxl" is "Ready"
	I1202 21:18:25.996606  467256 pod_ready.go:86] duration metric: took 7.511249317s for pod "coredns-66bc5c9577-nxfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.000292  467256 pod_ready.go:83] waiting for pod "etcd-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.009054  467256 pod_ready.go:94] pod "etcd-functional-218190" is "Ready"
	I1202 21:18:26.009070  467256 pod_ready.go:86] duration metric: took 8.763686ms for pod "etcd-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.011830  467256 pod_ready.go:83] waiting for pod "kube-apiserver-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.017321  467256 pod_ready.go:94] pod "kube-apiserver-functional-218190" is "Ready"
	I1202 21:18:26.017336  467256 pod_ready.go:86] duration metric: took 5.492317ms for pod "kube-apiserver-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.020296  467256 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.188938  467256 pod_ready.go:94] pod "kube-controller-manager-functional-218190" is "Ready"
	I1202 21:18:26.188952  467256 pod_ready.go:86] duration metric: took 168.642585ms for pod "kube-controller-manager-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.389873  467256 pod_ready.go:83] waiting for pod "kube-proxy-sdl9j" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.788964  467256 pod_ready.go:94] pod "kube-proxy-sdl9j" is "Ready"
	I1202 21:18:26.788978  467256 pod_ready.go:86] duration metric: took 399.091353ms for pod "kube-proxy-sdl9j" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:26.990653  467256 pod_ready.go:83] waiting for pod "kube-scheduler-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:27.389605  467256 pod_ready.go:94] pod "kube-scheduler-functional-218190" is "Ready"
	I1202 21:18:27.389618  467256 pod_ready.go:86] duration metric: took 398.952405ms for pod "kube-scheduler-functional-218190" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 21:18:27.389629  467256 pod_ready.go:40] duration metric: took 8.907632634s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
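Each pod_ready wait above polls until the pod reports the PodReady condition as True, or the pod disappears (the "Ready or be gone" wording). A client-go sketch of that loop for the coredns pod, using the kubeconfig path from this run (a simplified illustration, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-444114/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-nxfxl", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod gone") // also ends the wait
			return
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between not-Ready checks
	}
}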
	I1202 21:18:27.445261  467256 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 21:18:27.448579  467256 out.go:179] * Done! kubectl is now configured to use "functional-218190" cluster and "default" namespace by default
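The closing version check reports the minor-version skew between the local kubectl (1.33.2) and the cluster (1.34.2); a skew of 1 is within kubectl's supported +/-1 window, so the line is informational only. A sketch of the comparison, assuming plain major.minor.patch version strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.2", "1.34.2"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is outside the supported +/-1 minor skew")
	}
}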
	
	
	==> CRI-O <==
	Dec 02 21:19:07 functional-218190 crio[3534]: time="2025-12-02T21:19:07.685801467Z" level=info msg="Checking pod default_hello-node-75c85bcc94-t8mwf for CNI network kindnet (type=ptp)"
	Dec 02 21:19:07 functional-218190 crio[3534]: time="2025-12-02T21:19:07.688538923Z" level=info msg="Ran pod sandbox 04f4d8593b5741fcaa3d319973ba960844d4790cb3acb60c7a9ed0a7e460199d with infra container: default/hello-node-75c85bcc94-t8mwf/POD" id=c7d333f6-ed73-4103-827e-e4ee76bbb8e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 21:19:07 functional-218190 crio[3534]: time="2025-12-02T21:19:07.690275623Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ad3d40c7-e984-4f1c-856e-af7fccccb7d6 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:19:08 functional-218190 crio[3534]: time="2025-12-02T21:19:08.617562699Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d97dd1c1-87df-4a6f-99fc-eefa1263399d name=/runtime.v1.ImageService/PullImage
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.750803373Z" level=info msg="Stopping pod sandbox: bfddb958ccfb4b70728029b6c13fc3d87510cef27e1e6e3d9469dc00d18619ea" id=b24a05f4-02a5-4c8b-bfbf-3d970c638f50 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.750878139Z" level=info msg="Stopped pod sandbox (already stopped): bfddb958ccfb4b70728029b6c13fc3d87510cef27e1e6e3d9469dc00d18619ea" id=b24a05f4-02a5-4c8b-bfbf-3d970c638f50 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.751402255Z" level=info msg="Removing pod sandbox: bfddb958ccfb4b70728029b6c13fc3d87510cef27e1e6e3d9469dc00d18619ea" id=a64d6f6e-9745-47ae-8d65-c39d8ee9dc91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.755960309Z" level=info msg="Removed pod sandbox: bfddb958ccfb4b70728029b6c13fc3d87510cef27e1e6e3d9469dc00d18619ea" id=a64d6f6e-9745-47ae-8d65-c39d8ee9dc91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.7594865Z" level=info msg="Stopping pod sandbox: f82baa55a4020c805f245ce432084d31b186d443f871c349923ac9372f3a5e39" id=020af6e6-c37e-4844-b36d-a26944460269 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.759544216Z" level=info msg="Stopped pod sandbox (already stopped): f82baa55a4020c805f245ce432084d31b186d443f871c349923ac9372f3a5e39" id=020af6e6-c37e-4844-b36d-a26944460269 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.761728526Z" level=info msg="Removing pod sandbox: f82baa55a4020c805f245ce432084d31b186d443f871c349923ac9372f3a5e39" id=56b78658-65f7-49ba-a1e7-1de84e32ac5b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.765685876Z" level=info msg="Removed pod sandbox: f82baa55a4020c805f245ce432084d31b186d443f871c349923ac9372f3a5e39" id=56b78658-65f7-49ba-a1e7-1de84e32ac5b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.766932256Z" level=info msg="Stopping pod sandbox: 87a89d656ba5bb9508102ab31249ca43c98cd34305b419e371998903e672cf21" id=eb9416d6-46cb-47ca-86e0-ef22d6be174d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.766972733Z" level=info msg="Stopped pod sandbox (already stopped): 87a89d656ba5bb9508102ab31249ca43c98cd34305b419e371998903e672cf21" id=eb9416d6-46cb-47ca-86e0-ef22d6be174d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.76916876Z" level=info msg="Removing pod sandbox: 87a89d656ba5bb9508102ab31249ca43c98cd34305b419e371998903e672cf21" id=9532ea9b-e13b-4d0a-9072-ce85823b0bb7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:10 functional-218190 crio[3534]: time="2025-12-02T21:19:10.777202854Z" level=info msg="Removed pod sandbox: 87a89d656ba5bb9508102ab31249ca43c98cd34305b419e371998903e672cf21" id=9532ea9b-e13b-4d0a-9072-ce85823b0bb7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 21:19:19 functional-218190 crio[3534]: time="2025-12-02T21:19:19.617216209Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=83ffe9e6-39cc-47b6-be61-3576ec20761f name=/runtime.v1.ImageService/PullImage
	Dec 02 21:19:32 functional-218190 crio[3534]: time="2025-12-02T21:19:32.617792107Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=39afdd27-c9ca-4af7-9daf-385b5bc72e1e name=/runtime.v1.ImageService/PullImage
	Dec 02 21:19:47 functional-218190 crio[3534]: time="2025-12-02T21:19:47.617256202Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=22b3e6d7-20b1-4b12-8c9d-0f8e9f46909b name=/runtime.v1.ImageService/PullImage
	Dec 02 21:20:19 functional-218190 crio[3534]: time="2025-12-02T21:20:19.617156204Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=892b78d7-94f5-4127-83fc-a9908fa9e6b1 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:20:28 functional-218190 crio[3534]: time="2025-12-02T21:20:28.616929552Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=542f8a64-acce-40fc-9e40-7c2b7452170d name=/runtime.v1.ImageService/PullImage
	Dec 02 21:21:45 functional-218190 crio[3534]: time="2025-12-02T21:21:45.616631523Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2ca756b5-fc15-4795-a265-359cb4c12d91 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:21:57 functional-218190 crio[3534]: time="2025-12-02T21:21:57.617127954Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=611bbc0d-dd23-4af2-b4f0-eed95ae9f6f0 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:24:35 functional-218190 crio[3534]: time="2025-12-02T21:24:35.616962939Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6480ffb8-df4e-4c33-862e-972b1ad5d434 name=/runtime.v1.ImageService/PullImage
	Dec 02 21:24:38 functional-218190 crio[3534]: time="2025-12-02T21:24:38.617076346Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d86ef4e6-ec30-4718-8f4a-62f41553fa4f name=/runtime.v1.ImageService/PullImage
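The widening gaps between the repeated "Pulling image: kicbase/echo-server:latest" entries are consistent with kubelet's image-pull back-off, which roughly doubles the retry delay after each failed pull up to a cap (retries from the two hello-node pods interleave here, so the spacing is not a clean doubling). A sketch of a doubling-with-cap schedule, assuming a 10s base and a 5m cap:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second     // assumed base delay
	maxDelay := 5 * time.Minute   // assumed cap
	for i := 0; i < 8; i++ {
		fmt.Printf("retry %d after %v\n", i+1, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}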
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b0018304f38b2       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   f768fa720f745       sp-pod                                      default
	e34e8b7626475       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   eea5232d164a1       nginx-svc                                   default
	cc98efa4cdc54       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   5694a687ff24f       coredns-66bc5c9577-nxfxl                    kube-system
	30b74c0d40557       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   8fb0b50c3c164       kindnet-4nthc                               kube-system
	03a913b7cac5a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Running             kube-proxy                2                   0b4faef84a4e3       kube-proxy-sdl9j                            kube-system
	7b0bbc346328d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   b1b7ce5f92e6b       storage-provisioner                         kube-system
	1f523702a8c16       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  10 minutes ago      Running             kube-apiserver            0                   94478d25038df       kube-apiserver-functional-218190            kube-system
	30e72d8cf7c2c       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Running             kube-scheduler            2                   413a5a6c4c44a       kube-scheduler-functional-218190            kube-system
	56d84b80caf61       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Running             kube-controller-manager   2                   08af24bb2b83c       kube-controller-manager-functional-218190   kube-system
	f57680fbf079a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Running             etcd                      2                   e781097453fcb       etcd-functional-218190                      kube-system
	82b4913f52c10       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  11 minutes ago      Exited              kube-controller-manager   1                   08af24bb2b83c       kube-controller-manager-functional-218190   kube-system
	bac927666c5c2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  11 minutes ago      Exited              kube-proxy                1                   0b4faef84a4e3       kube-proxy-sdl9j                            kube-system
	54cfa3e00bc56       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   8fb0b50c3c164       kindnet-4nthc                               kube-system
	76f8f155db498       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   b1b7ce5f92e6b       storage-provisioner                         kube-system
	5994ad3375933       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   5694a687ff24f       coredns-66bc5c9577-nxfxl                    kube-system
	fa040bfd1e663       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  11 minutes ago      Exited              etcd                      1                   e781097453fcb       etcd-functional-218190                      kube-system
	2ca345d71938d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  11 minutes ago      Exited              kube-scheduler            1                   413a5a6c4c44a       kube-scheduler-functional-218190            kube-system
	
	
	==> coredns [5994ad33759332e65287e3904b93f8f8117b47970c8165872665be0d53bc0738] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37306 - 11396 "HINFO IN 6730002705827979655.5960236833480806162. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011640264s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc98efa4cdc54d714ace90a2db7760cb528d3aaeaf3caa387a380eca061e6237] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57629 - 26080 "HINFO IN 562481537457692229.8640718491662996232. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034512283s
	
	
	==> describe nodes <==
	Name:               functional-218190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-218190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=functional-218190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T21_16_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 21:16:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-218190
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 21:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 21:28:48 +0000   Tue, 02 Dec 2025 21:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 21:28:48 +0000   Tue, 02 Dec 2025 21:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 21:28:48 +0000   Tue, 02 Dec 2025 21:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 21:28:48 +0000   Tue, 02 Dec 2025 21:17:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-218190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                bfe5c894-9e79-44ab-8142-db65976f56bd
	  Boot ID:                    c77b83b8-287c-4d91-bf3a-e2991f41400e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-t8mwf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-6p88f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-nxfxl                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-218190                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-4nthc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-218190             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-218190    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sdl9j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-218190             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-218190 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-218190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-218190 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-218190 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-218190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-218190 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-218190 event: Registered Node functional-218190 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-218190 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-218190 event: Registered Node functional-218190 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-218190 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-218190 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-218190 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-218190 event: Registered Node functional-218190 in Controller
	
	
	==> dmesg <==
	[Dec 2 18:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503934] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f57680fbf079a110e95ce3d9daed57b9e45b61abe88a2f36dc5bc71d8729929e] <==
	{"level":"warn","ts":"2025-12-02T21:18:14.182033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.196234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.224561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.234239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.248368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.264227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.289255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.304455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.338357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.343802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.361430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.378332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.393974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.419161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.443410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.452100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.468031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.489554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.520792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.529976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.544751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:18:14.595181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35312","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T21:28:13.021152Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1134}
	{"level":"info","ts":"2025-12-02T21:28:13.044220Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1134,"took":"22.742991ms","hash":317537219,"current-db-size-bytes":3260416,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1466368,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-12-02T21:28:13.044279Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":317537219,"revision":1134,"compact-revision":-1}
	
	
	==> etcd [fa040bfd1e663f4ab6d85d04bc21e133954ab3c7115b149795ea4a9654a82234] <==
	{"level":"warn","ts":"2025-12-02T21:17:28.091833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.108381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.141297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.184817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.201705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.218715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T21:17:28.297319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34312","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T21:17:54.888798Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T21:17:54.888846Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-218190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T21:17:54.888956Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T21:17:55.043291Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T21:17:55.043351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T21:17:55.043394Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T21:17:55.043497Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-02T21:17:55.043514Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T21:17:55.043552Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T21:17:55.043562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T21:17:55.043536Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T21:17:55.043603Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T21:17:55.043622Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T21:17:55.043629Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T21:17:55.047477Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T21:17:55.047574Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T21:17:55.047607Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T21:17:55.047622Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-218190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 21:28:54 up  3:11,  0 user,  load average: 0.04, 0.32, 0.89
	Linux functional-218190 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [30b74c0d40557a0800e42599d39c6c8848b05de4d7ab1db75cf3a58df4f1c91e] <==
	I1202 21:26:46.313536       1 main.go:301] handling current node
	I1202 21:26:56.320644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:26:56.320680       1 main.go:301] handling current node
	I1202 21:27:06.315806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:06.315843       1 main.go:301] handling current node
	I1202 21:27:16.313024       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:16.313148       1 main.go:301] handling current node
	I1202 21:27:26.313972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:26.314010       1 main.go:301] handling current node
	I1202 21:27:36.313454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:36.313504       1 main.go:301] handling current node
	I1202 21:27:46.317770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:46.317879       1 main.go:301] handling current node
	I1202 21:27:56.317566       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:27:56.317601       1 main.go:301] handling current node
	I1202 21:28:06.317813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:28:06.317848       1 main.go:301] handling current node
	I1202 21:28:16.313182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:28:16.313311       1 main.go:301] handling current node
	I1202 21:28:26.319854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:28:26.319893       1 main.go:301] handling current node
	I1202 21:28:36.313490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:28:36.313529       1 main.go:301] handling current node
	I1202 21:28:46.316836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:28:46.316872       1 main.go:301] handling current node
	
	
	==> kindnet [54cfa3e00bc564db16308dcdbbdaaf7097d8320909246cb8ab229bae29dfd867] <==
	I1202 21:17:25.491914       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 21:17:25.503345       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 21:17:25.503585       1 main.go:148] setting mtu 1500 for CNI 
	I1202 21:17:25.503629       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 21:17:25.503673       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T21:17:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 21:17:25.720063       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 21:17:25.720090       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 21:17:25.720100       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 21:17:25.720391       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 21:17:29.820824       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 21:17:29.820876       1 metrics.go:72] Registering metrics
	I1202 21:17:29.820945       1 controller.go:711] "Syncing nftables rules"
	I1202 21:17:35.708566       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:17:35.708609       1 main.go:301] handling current node
	I1202 21:17:45.708259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 21:17:45.708328       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1f523702a8c1692132f6502e68de27c2d540e7ae466e77e12b78f0cb889f30bd] <==
	I1202 21:18:15.345178       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 21:18:15.345205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 21:18:15.345231       1 cache.go:39] Caches are synced for autoregister controller
	I1202 21:18:15.345396       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 21:18:15.353180       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 21:18:15.358649       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 21:18:15.358677       1 policy_source.go:240] refreshing policies
	I1202 21:18:15.370009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1202 21:18:15.400381       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 21:18:15.686281       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 21:18:16.120607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 21:18:17.081033       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 21:18:17.202810       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 21:18:17.289334       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 21:18:17.301754       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 21:18:18.952350       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 21:18:19.001113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 21:18:19.051278       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 21:18:30.769000       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.175.34"}
	I1202 21:18:42.967650       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.145.220"}
	I1202 21:18:52.627321       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.0.40"}
	E1202 21:19:00.469601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50536: use of closed network connection
	E1202 21:19:00.946515       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1202 21:19:07.439249       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.222.232"}
	I1202 21:28:15.278860       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [56d84b80caf614f1ee2fba31824955e5b12635a7af57a54c491cbe9c8b385d06] <==
	I1202 21:18:18.661575       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 21:18:18.663742       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 21:18:18.663753       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 21:18:18.667029       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 21:18:18.670352       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 21:18:18.671597       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 21:18:18.674928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:18:18.678085       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 21:18:18.690344       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 21:18:18.691555       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 21:18:18.693988       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 21:18:18.694077       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 21:18:18.694148       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-218190"
	I1202 21:18:18.694193       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 21:18:18.694238       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 21:18:18.694285       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 21:18:18.694383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 21:18:18.694445       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 21:18:18.694567       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 21:18:18.694647       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 21:18:18.694907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 21:18:18.695893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 21:18:18.695945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 21:18:18.696390       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 21:18:18.701626       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-controller-manager [82b4913f52c103abce25613a5b70261f4d284511e73c33bec6d364372fb9a5c1] <==
	I1202 21:17:32.814886       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 21:17:32.814893       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 21:17:32.817030       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 21:17:32.819976       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 21:17:32.819988       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 21:17:32.822375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 21:17:32.823413       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 21:17:32.825670       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 21:17:32.843086       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 21:17:32.845574       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 21:17:32.845629       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 21:17:32.845643       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 21:17:32.845673       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 21:17:32.845701       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 21:17:32.852214       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 21:17:32.855351       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 21:17:32.859621       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 21:17:32.860809       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 21:17:32.860895       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 21:17:32.860992       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-218190"
	I1202 21:17:32.861041       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 21:17:32.866244       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 21:17:32.868507       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 21:17:32.872757       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 21:17:32.874950       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [03a913b7cac5a397e8a42b8fac42e72debacab2dcf69556753038184a103d9b4] <==
	I1202 21:18:16.043708       1 server_linux.go:53] "Using iptables proxy"
	I1202 21:18:16.144689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 21:18:16.246825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 21:18:16.246929       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 21:18:16.247103       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 21:18:16.285880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 21:18:16.286025       1 server_linux.go:132] "Using iptables Proxier"
	I1202 21:18:16.297336       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 21:18:16.298073       1 server.go:527] "Version info" version="v1.34.2"
	I1202 21:18:16.298412       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:18:16.304508       1 config.go:200] "Starting service config controller"
	I1202 21:18:16.304590       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 21:18:16.304633       1 config.go:106] "Starting endpoint slice config controller"
	I1202 21:18:16.304661       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 21:18:16.304697       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 21:18:16.304725       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 21:18:16.305396       1 config.go:309] "Starting node config controller"
	I1202 21:18:16.305458       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 21:18:16.305490       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 21:18:16.406091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 21:18:16.406243       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 21:18:16.405757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [bac927666c5c2fa5fd82e7d087ac16576e9dd4425138783bc61b3a116de8a2a2] <==
	I1202 21:17:29.820317       1 server_linux.go:53] "Using iptables proxy"
	I1202 21:17:30.819175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 21:17:30.920325       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 21:17:30.920434       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 21:17:30.920546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 21:17:30.944153       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 21:17:30.944268       1 server_linux.go:132] "Using iptables Proxier"
	I1202 21:17:30.950629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 21:17:30.950901       1 server.go:527] "Version info" version="v1.34.2"
	I1202 21:17:30.951200       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:17:30.952537       1 config.go:200] "Starting service config controller"
	I1202 21:17:30.952623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 21:17:30.952693       1 config.go:106] "Starting endpoint slice config controller"
	I1202 21:17:30.952745       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 21:17:30.952784       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 21:17:30.952829       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 21:17:30.953545       1 config.go:309] "Starting node config controller"
	I1202 21:17:30.953619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 21:17:30.953650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 21:17:31.052752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 21:17:31.052913       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 21:17:31.052930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2ca345d71938d2be7806dd9f5d396868556008b6c6693317e2b1ffbd2e4895e4] <==
	I1202 21:17:28.950710       1 serving.go:386] Generated self-signed cert in-memory
	I1202 21:17:30.985556       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 21:17:30.986470       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:17:30.991216       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 21:17:30.991262       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 21:17:30.991307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:17:30.991321       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:17:30.991343       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 21:17:30.991354       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 21:17:30.993141       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 21:17:30.993224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 21:17:31.091510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 21:17:31.091655       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1202 21:17:31.091813       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:17:54.889870       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 21:17:54.889903       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 21:17:54.889920       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 21:17:54.889946       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:17:54.889976       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1202 21:17:54.889991       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 21:17:54.890373       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 21:17:54.890409       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [30e72d8cf7c2cae0085f47bc8d3f6d93cb1fa0f726e039f1b8b8e4e5f1d78ffa] <==
	I1202 21:18:12.419243       1 serving.go:386] Generated self-signed cert in-memory
	W1202 21:18:15.254588       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 21:18:15.254619       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 21:18:15.254630       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 21:18:15.254637       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 21:18:15.341733       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 21:18:15.341768       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 21:18:15.351095       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:18:15.351139       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 21:18:15.351944       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 21:18:15.352172       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 21:18:15.451826       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 21:26:15 functional-218190 kubelet[3849]: E1202 21:26:15.616700    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:26:17 functional-218190 kubelet[3849]: E1202 21:26:17.615822    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:26:29 functional-218190 kubelet[3849]: E1202 21:26:29.615872    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:26:30 functional-218190 kubelet[3849]: E1202 21:26:30.616302    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:26:41 functional-218190 kubelet[3849]: E1202 21:26:41.616739    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:26:43 functional-218190 kubelet[3849]: E1202 21:26:43.616000    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:26:52 functional-218190 kubelet[3849]: E1202 21:26:52.616796    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:26:57 functional-218190 kubelet[3849]: E1202 21:26:57.616306    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:27:06 functional-218190 kubelet[3849]: E1202 21:27:06.616137    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:27:10 functional-218190 kubelet[3849]: E1202 21:27:10.616356    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:27:20 functional-218190 kubelet[3849]: E1202 21:27:20.617346    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:27:24 functional-218190 kubelet[3849]: E1202 21:27:24.617032    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:27:34 functional-218190 kubelet[3849]: E1202 21:27:34.617738    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:27:37 functional-218190 kubelet[3849]: E1202 21:27:37.615772    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:27:47 functional-218190 kubelet[3849]: E1202 21:27:47.615802    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:27:51 functional-218190 kubelet[3849]: E1202 21:27:51.615989    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:27:58 functional-218190 kubelet[3849]: E1202 21:27:58.615931    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:28:03 functional-218190 kubelet[3849]: E1202 21:28:03.616198    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:28:11 functional-218190 kubelet[3849]: E1202 21:28:11.616448    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:28:18 functional-218190 kubelet[3849]: E1202 21:28:18.616437    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:28:26 functional-218190 kubelet[3849]: E1202 21:28:26.616720    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:28:32 functional-218190 kubelet[3849]: E1202 21:28:32.616938    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:28:41 functional-218190 kubelet[3849]: E1202 21:28:41.616091    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	Dec 02 21:28:47 functional-218190 kubelet[3849]: E1202 21:28:47.616610    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6p88f" podUID="af52929e-09cf-4e4f-817a-ffdd19de5bc8"
	Dec 02 21:28:53 functional-218190 kubelet[3849]: E1202 21:28:53.616530    3849 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t8mwf" podUID="322a130f-9a4f-4cb7-a7c1-dc6a5ada78da"
	
	
	==> storage-provisioner [76f8f155db498c4f9e6544c93ee77b005d408f20c7db1988ab64c180be55c615] <==
	I1202 21:17:26.293240       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 21:17:30.147366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 21:17:30.147508       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 21:17:30.196119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:33.809312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:38.069856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:41.668787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:44.722486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:47.745256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:47.750860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 21:17:47.751119       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 21:17:47.753238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-218190_25373a38-f0a4-4de0-8d55-dffde4a65286!
	I1202 21:17:47.754029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13f9d572-441c-4b15-b447-1caa248d8904", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-218190_25373a38-f0a4-4de0-8d55-dffde4a65286 became leader
	W1202 21:17:47.754454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:47.760405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 21:17:47.853614       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-218190_25373a38-f0a4-4de0-8d55-dffde4a65286!
	W1202 21:17:49.764173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:49.769783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:51.773476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:51.778494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:53.781513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:17:53.788269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7b0bbc346328d5bb0283e7a36ee97f2d88c87cab9cc4f4b0db3fe3d9b3e5d732] <==
	W1202 21:28:30.128458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:32.131504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:32.138362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:34.141414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:34.146066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:36.149157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:36.153576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:38.156140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:38.160506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:40.163794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:40.170518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:42.179498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:42.185556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:44.188718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:44.193342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:46.196938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:46.203810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:48.206531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:48.210973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:50.214682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:50.218835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:52.222006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:52.228481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:54.231896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 21:28:54.240772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
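Note on the storage-provisioner output above: the steady stream of "v1 Endpoints is deprecated in v1.33+" warnings is client-go deprecation noise from the provisioner's Endpoints-based leader election (the kube-system/k8s.io-minikube-hostpath lease object visible in its log), not a failure by itself. A quick way to confirm the lease is still held, assuming the functional-218190 context from this run:

	kubectl --context functional-218190 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The actual failure signal in these logs is the kubelet ImagePullBackOff loop for kicbase/echo-server, covered after the pod post-mortem below.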
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-218190 -n functional-218190
helpers_test.go:269: (dbg) Run:  kubectl --context functional-218190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-t8mwf hello-node-connect-7d85dfc575-6p88f
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-218190 describe pod hello-node-75c85bcc94-t8mwf hello-node-connect-7d85dfc575-6p88f
helpers_test.go:290: (dbg) kubectl --context functional-218190 describe pod hello-node-75c85bcc94-t8mwf hello-node-connect-7d85dfc575-6p88f:

-- stdout --
	Name:             hello-node-75c85bcc94-t8mwf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-218190/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 21:19:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wlrv8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wlrv8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m48s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-t8mwf to functional-218190
	  Normal   Pulling    6m58s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m44s (x20 over 9m48s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-6p88f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-218190/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 21:18:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gzlz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6gzlz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6p88f to functional-218190
	  Normal   Pulling    7m10s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5m1s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m1s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.45s)
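
Note: every pull failure above has the same root cause — the node's short-name resolution is set to "enforcing", so the unqualified reference "kicbase/echo-server" is rejected instead of being resolved against a single registry. A minimal remediation sketch, assuming the node resolves names via containers-registries.conf(5) and that the intended image is the Docker Hub copy (both are assumptions, not confirmed by this log):

    # Hypothetical alias drop-in on the minikube node; path and alias are assumptions.
    sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
    [aliases]
    # Map the short name used by the tests to a fully qualified reference.
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
    EOF

Alternatively, the test deployments could reference a fully qualified image name and bypass short-name resolution entirely.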

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-218190" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)
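
Note: the assertion only inspects `image ls` output. A hand-check sketch for whether the image actually reached CRI-O inside the node (standard minikube/crictl tooling; the profile name is taken from the log above):

    # List CRI-O images on the node and look for the test tag.
    out/minikube-linux-arm64 -p functional-218190 ssh -- sudo crictl images | grep echo-server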

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-218190" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-218190
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image load --daemon kicbase/echo-server:functional-218190 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-218190" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image save kicbase/echo-server:functional-218190 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 21:18:42.031153  471208 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:18:42.031414  471208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:18:42.031446  471208 out.go:374] Setting ErrFile to fd 2...
	I1202 21:18:42.031469  471208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:18:42.031826  471208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:18:42.032568  471208 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:18:42.032799  471208 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:18:42.033407  471208 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
	I1202 21:18:42.057365  471208 ssh_runner.go:195] Run: systemctl --version
	I1202 21:18:42.057472  471208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
	I1202 21:18:42.102736  471208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
	I1202 21:18:42.239854  471208 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1202 21:18:42.239915  471208 cache_images.go:255] Failed to load cached images for "functional-218190": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1202 21:18:42.239940  471208 cache_images.go:267] failed pushing to: functional-218190

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)
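
Note: the stat error in the stderr above shows the tarball was never written, so this failure cascades from ImageSaveToFile rather than being an independent load bug. A manual round-trip sketch that guards the load on the save succeeding (the scratch path is a hypothetical choice; both commands are copied from the failing invocations):

    TAR=/tmp/echo-server-save.tar  # hypothetical scratch path
    out/minikube-linux-arm64 -p functional-218190 image save kicbase/echo-server:functional-218190 "$TAR" --alsologtostderr
    test -f "$TAR" && out/minikube-linux-arm64 -p functional-218190 image load "$TAR" --alsologtostderr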

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-218190
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image save --daemon kicbase/echo-server:functional-218190 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-218190
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-218190: exit status 1 (25.933689ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-218190

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)
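
Note: per the assertion above, `image save --daemon` is expected to land the image in the host Docker daemon under the localhost/ prefix. A diagnostic sketch that probes both plausible names in one call (docker CLI only; which name minikube actually writes is the open question here, so treat both as candidates):

    docker image inspect --format '{{.Id}}' \
      kicbase/echo-server:functional-218190 \
      localhost/kicbase/echo-server:functional-218190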

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-218190 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-218190 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-t8mwf" [322a130f-9a4f-4cb7-a7c1-dc6a5ada78da] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1202 21:21:18.469528  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:21:46.177540  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:26:18.469404  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-218190 -n functional-218190
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 21:29:07.851159267 +0000 UTC m=+1259.522221876
functional_test.go:1460: (dbg) Run:  kubectl --context functional-218190 describe po hello-node-75c85bcc94-t8mwf -n default
functional_test.go:1460: (dbg) kubectl --context functional-218190 describe po hello-node-75c85bcc94-t8mwf -n default:
Name:             hello-node-75c85bcc94-t8mwf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-218190/192.168.49.2
Start Time:       Tue, 02 Dec 2025 21:19:07 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wlrv8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wlrv8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-t8mwf to functional-218190
  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-218190 logs hello-node-75c85bcc94-t8mwf -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-218190 logs hello-node-75c85bcc94-t8mwf -n default: exit status 1 (130.408896ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-t8mwf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-218190 logs hello-node-75c85bcc94-t8mwf -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)
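
Note: the describe output shows the same enforcing short-name rejection seen in ServiceCmdConnect. A sketch of the equivalent deployment using a fully qualified image, assuming the Docker Hub copy is the intended one (an assumption; the registry is not named anywhere in this log):

    kubectl --context functional-218190 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest
    kubectl --context functional-218190 expose deployment hello-node --type=NodePort --port=8080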

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 service --namespace=default --https --url hello-node: exit status 115 (463.802911ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31425
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-218190 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
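
Note: SVC_UNREACHABLE here (and in the Format and URL subtests below) is a downstream symptom — the NodePort exists, since the URL is printed, but the service has no ready backends because the pod never pulled its image. A quick diagnostic sketch using only standard kubectl:

    # No addresses under ENDPOINTS confirms there is no running pod behind the service.
    kubectl --context functional-218190 -n default get endpoints hello-node
    kubectl --context functional-218190 -n default get pods -l app=hello-node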

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 service hello-node --url --format={{.IP}}: exit status 115 (515.050796ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-218190 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 service hello-node --url: exit status 115 (689.358661ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31425
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-218190 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31425
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.69s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (508.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1202 21:31:18.472154  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:41.541773  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.595301  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.601765  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.613283  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.634785  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.676255  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.757696  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:42.919331  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:43.240993  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:43.883142  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:45.164716  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:47.727125  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:52.849131  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:34:03.091433  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:34:23.572777  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:35:04.535832  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:36:18.471186  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:36:26.460687  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m27.082451977s)

                                                
                                                
-- stdout --
	* [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:35791
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:35791 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203058s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
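
Note: the start dies at kubeadm's wait-control-plane phase because the kubelet never reports healthy on 127.0.0.1:10248, and the preflight warnings point at cgroup v1 handling on this 5.15 AWS kernel. Following the suggestion printed in the stderr above, a retry sketch (flags copied from the failing invocation plus the suggested extra-config; whether it resolves this particular kubelet failure is unverified):

    # Inspect the kubelet's own logs on the node first.
    out/minikube-linux-arm64 -p functional-066896 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # Retry with the cgroup driver pinned to systemd, as the error text suggests.
    out/minikube-linux-arm64 start -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd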
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 6 (345.168738ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 21:37:52.932707  482815 status.go:458] kubeconfig endpoint: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
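Exit status 6 here matches the stderr above: the host itself reports Running, but the kubeconfig at /home/jenkins/minikube-integration/21997-444114/kubeconfig has no entry for "functional-066896", so status can only report the endpoint as missing. A minimal recovery sketch, assuming the profile still exists, is the fix the warning itself suggests:

	# regenerate the kubeconfig entry for this profile, then verify the context exists
	minikube update-context -p functional-066896
	kubectl config get-contexts functional-066896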
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount2 --alsologtostderr -v=1                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount          │ -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount3 --alsologtostderr -v=1                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh            │ functional-218190 ssh findmnt -T /mount1                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount2                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount3                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount          │ -p functional-218190 --kill=true                                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service list                                                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service list -o json                                                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service --namespace=default --https --url hello-node                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-218190 --alsologtostderr -v=1                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service hello-node --url --format={{.IP}}                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service hello-node --url                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format short --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image          │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete         │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:29:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:29:25.541747  476702 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:25.541854  476702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:25.541858  476702 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:25.541864  476702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:25.542102  476702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:29:25.542492  476702 out.go:368] Setting JSON to false
	I1202 21:29:25.543326  476702 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11494,"bootTime":1764699472,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:29:25.543390  476702 start.go:143] virtualization:  
	I1202 21:29:25.547707  476702 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:29:25.552092  476702 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:25.552158  476702 notify.go:221] Checking for updates...
	I1202 21:29:25.558709  476702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:25.561926  476702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:29:25.565014  476702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:29:25.568028  476702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:25.570991  476702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:25.574394  476702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:25.601284  476702 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:25.601397  476702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:25.659200  476702 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 21:29:25.650163923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:25.659288  476702 docker.go:319] overlay module found
	I1202 21:29:25.662505  476702 out.go:179] * Using the docker driver based on user configuration
	I1202 21:29:25.665476  476702 start.go:309] selected driver: docker
	I1202 21:29:25.665483  476702 start.go:927] validating driver "docker" against <nil>
	I1202 21:29:25.665494  476702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:25.666239  476702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:25.720734  476702 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 21:29:25.711753155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:25.720885  476702 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:29:25.721091  476702 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:29:25.724203  476702 out.go:179] * Using Docker driver with root privileges
	I1202 21:29:25.727207  476702 cni.go:84] Creating CNI manager for ""
	I1202 21:29:25.727260  476702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:29:25.727269  476702 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:29:25.727338  476702 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:25.732356  476702 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:29:25.735140  476702 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:29:25.738174  476702 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:29:25.741064  476702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:29:25.741141  476702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:29:25.759091  476702 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:29:25.759101  476702 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:29:25.812983  476702 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:29:26.007913  476702 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:29:26.008162  476702 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008266  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:29:26.008276  476702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 141.138µs
	I1202 21:29:26.008282  476702 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:29:26.008289  476702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:29:26.008300  476702 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008319  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json: {Name:mk0652d48f2085169c19b04c6d9462b8846a80f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:26.008342  476702 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008354  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:29:26.008363  476702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 64.928µs
	I1202 21:29:26.008370  476702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:29:26.008372  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:29:26.008384  476702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.242µs
	I1202 21:29:26.008389  476702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:29:26.008399  476702 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008479  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:29:26.008485  476702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 86.819µs
	I1202 21:29:26.008489  476702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:29:26.008497  476702 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008523  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:29:26.008527  476702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.491µs
	I1202 21:29:26.008532  476702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:29:26.008538  476702 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:29:26.008540  476702 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008563  476702 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008572  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:29:26.008576  476702 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.342µs
	I1202 21:29:26.008580  476702 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:29:26.008588  476702 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008609  476702 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:29:26.008620  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:29:26.008632  476702 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 37.227µs
	I1202 21:29:26.008636  476702 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:29:26.008638  476702 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:29:26.008640  476702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.512µs
	I1202 21:29:26.008644  476702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:29:26.008644  476702 start.go:364] duration metric: took 71.993µs to acquireMachinesLock for "functional-066896"
	I1202 21:29:26.008657  476702 cache.go:87] Successfully saved all images to host disk.
	I1202 21:29:26.008666  476702 start.go:93] Provisioning new machine with config: &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:29:26.008726  476702 start.go:125] createHost starting for "" (driver="docker")
	I1202 21:29:26.014186  476702 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1202 21:29:26.014527  476702 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:35791 to docker env.
	I1202 21:29:26.014610  476702 start.go:159] libmachine.API.Create for "functional-066896" (driver="docker")
	I1202 21:29:26.014632  476702 client.go:173] LocalClient.Create starting
	I1202 21:29:26.014713  476702 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem
	I1202 21:29:26.014749  476702 main.go:143] libmachine: Decoding PEM data...
	I1202 21:29:26.014794  476702 main.go:143] libmachine: Parsing certificate...
	I1202 21:29:26.014855  476702 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem
	I1202 21:29:26.014877  476702 main.go:143] libmachine: Decoding PEM data...
	I1202 21:29:26.014887  476702 main.go:143] libmachine: Parsing certificate...
	I1202 21:29:26.015263  476702 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 21:29:26.033090  476702 cli_runner.go:211] docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 21:29:26.033168  476702 network_create.go:284] running [docker network inspect functional-066896] to gather additional debugging logs...
	I1202 21:29:26.033184  476702 cli_runner.go:164] Run: docker network inspect functional-066896
	W1202 21:29:26.050242  476702 cli_runner.go:211] docker network inspect functional-066896 returned with exit code 1
	I1202 21:29:26.050263  476702 network_create.go:287] error running [docker network inspect functional-066896]: docker network inspect functional-066896: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-066896 not found
	I1202 21:29:26.050277  476702 network_create.go:289] output of [docker network inspect functional-066896]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-066896 not found
	
	** /stderr **
	I1202 21:29:26.050391  476702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:29:26.067554  476702 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001954b70}
	I1202 21:29:26.067587  476702 network_create.go:124] attempt to create docker network functional-066896 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 21:29:26.067646  476702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-066896 functional-066896
	I1202 21:29:26.122038  476702 network_create.go:108] docker network functional-066896 192.168.49.0/24 created
	I1202 21:29:26.122072  476702 kic.go:121] calculated static IP "192.168.49.2" for the "functional-066896" container
	I1202 21:29:26.122147  476702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 21:29:26.137665  476702 cli_runner.go:164] Run: docker volume create functional-066896 --label name.minikube.sigs.k8s.io=functional-066896 --label created_by.minikube.sigs.k8s.io=true
	I1202 21:29:26.156244  476702 oci.go:103] Successfully created a docker volume functional-066896
	I1202 21:29:26.156318  476702 cli_runner.go:164] Run: docker run --rm --name functional-066896-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-066896 --entrypoint /usr/bin/test -v functional-066896:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 21:29:26.680664  476702 oci.go:107] Successfully prepared a docker volume functional-066896
	I1202 21:29:26.680715  476702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 21:29:26.680865  476702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 21:29:26.680970  476702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 21:29:26.736917  476702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-066896 --name functional-066896 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-066896 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-066896 --network functional-066896 --ip 192.168.49.2 --volume functional-066896:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 21:29:27.042756  476702 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Running}}
	I1202 21:29:27.067523  476702 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:29:27.094092  476702 cli_runner.go:164] Run: docker exec functional-066896 stat /var/lib/dpkg/alternatives/iptables
	I1202 21:29:27.146180  476702 oci.go:144] the created container "functional-066896" has a running status.
	I1202 21:29:27.146209  476702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa...
	I1202 21:29:27.508753  476702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 21:29:27.534366  476702 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:29:27.566200  476702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 21:29:27.566212  476702 kic_runner.go:114] Args: [docker exec --privileged functional-066896 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 21:29:27.620858  476702 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:29:27.648885  476702 machine.go:94] provisionDockerMachine start ...
	I1202 21:29:27.648969  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:27.676553  476702 main.go:143] libmachine: Using SSH client type: native
	I1202 21:29:27.676895  476702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:29:27.676903  476702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:29:27.677575  476702 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 21:29:30.830527  476702 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:29:30.830541  476702 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:29:30.830603  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:30.848635  476702 main.go:143] libmachine: Using SSH client type: native
	I1202 21:29:30.848933  476702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:29:30.848942  476702 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:29:31.011664  476702 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:29:31.011741  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:31.031158  476702 main.go:143] libmachine: Using SSH client type: native
	I1202 21:29:31.031460  476702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:29:31.031473  476702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:29:31.179317  476702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:29:31.179333  476702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:29:31.179355  476702 ubuntu.go:190] setting up certificates
	I1202 21:29:31.179362  476702 provision.go:84] configureAuth start
	I1202 21:29:31.179420  476702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:29:31.195929  476702 provision.go:143] copyHostCerts
	I1202 21:29:31.195997  476702 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:29:31.196004  476702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:29:31.196079  476702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:29:31.196177  476702 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:29:31.196181  476702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:29:31.196208  476702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:29:31.196264  476702 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:29:31.196267  476702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:29:31.196288  476702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:29:31.196332  476702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:29:31.407035  476702 provision.go:177] copyRemoteCerts
	I1202 21:29:31.407088  476702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:29:31.407129  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:31.425032  476702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:29:31.526402  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:29:31.543167  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:29:31.560295  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:29:31.577085  476702 provision.go:87] duration metric: took 397.70058ms to configureAuth
	I1202 21:29:31.577102  476702 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:29:31.577294  476702 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:31.577388  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:31.597412  476702 main.go:143] libmachine: Using SSH client type: native
	I1202 21:29:31.597717  476702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:29:31.597728  476702 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:29:31.910342  476702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:29:31.910356  476702 machine.go:97] duration metric: took 4.261459332s to provisionDockerMachine
	I1202 21:29:31.910366  476702 client.go:176] duration metric: took 5.895729743s to LocalClient.Create
	I1202 21:29:31.910378  476702 start.go:167] duration metric: took 5.895768973s to libmachine.API.Create "functional-066896"
	I1202 21:29:31.910384  476702 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:29:31.910395  476702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:29:31.910472  476702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:29:31.910511  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:31.927652  476702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:29:32.031397  476702 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:29:32.034950  476702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:29:32.034968  476702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:29:32.034978  476702 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:29:32.035067  476702 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:29:32.035161  476702 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:29:32.035253  476702 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:29:32.035304  476702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:29:32.044274  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:29:32.062402  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:29:32.079962  476702 start.go:296] duration metric: took 169.562873ms for postStartSetup
	I1202 21:29:32.080326  476702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:29:32.097199  476702 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:29:32.097473  476702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:29:32.097524  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:32.119240  476702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:29:32.219977  476702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:29:32.224460  476702 start.go:128] duration metric: took 6.215720847s to createHost
	I1202 21:29:32.224475  476702 start.go:83] releasing machines lock for "functional-066896", held for 6.215820065s
	I1202 21:29:32.224545  476702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:29:32.245851  476702 out.go:179] * Found network options:
	I1202 21:29:32.248720  476702 out.go:179]   - HTTP_PROXY=localhost:35791
	W1202 21:29:32.251668  476702 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1202 21:29:32.254524  476702 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1202 21:29:32.257445  476702 ssh_runner.go:195] Run: cat /version.json
	I1202 21:29:32.257486  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:32.257512  476702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:29:32.257576  476702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:29:32.276276  476702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:29:32.277247  476702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:29:32.374612  476702 ssh_runner.go:195] Run: systemctl --version
	I1202 21:29:32.480660  476702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:29:32.517818  476702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:29:32.522181  476702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:29:32.522246  476702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:29:32.549427  476702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 21:29:32.549450  476702 start.go:496] detecting cgroup driver to use...
	I1202 21:29:32.549482  476702 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:29:32.549540  476702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:29:32.565558  476702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:29:32.578867  476702 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:29:32.578944  476702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:29:32.600536  476702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:29:32.622398  476702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:29:32.735981  476702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:29:32.860888  476702 docker.go:234] disabling docker service ...
	I1202 21:29:32.860961  476702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:29:32.882819  476702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:29:32.896293  476702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:29:33.006815  476702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:29:33.129580  476702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:29:33.144543  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:29:33.160069  476702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:29:33.160151  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.169817  476702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:29:33.169879  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.179163  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.187860  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.196817  476702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:29:33.204767  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.213484  476702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.227942  476702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:29:33.237261  476702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:29:33.244816  476702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:29:33.251998  476702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:29:33.367404  476702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:29:33.532905  476702 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:29:33.532965  476702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:29:33.536672  476702 start.go:564] Will wait 60s for crictl version
	I1202 21:29:33.536729  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:33.540066  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:29:33.564965  476702 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:29:33.565065  476702 ssh_runner.go:195] Run: crio --version
	I1202 21:29:33.592708  476702 ssh_runner.go:195] Run: crio --version
	I1202 21:29:33.625189  476702 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:29:33.627890  476702 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:29:33.642636  476702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:29:33.646474  476702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:29:33.656055  476702 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:29:33.656153  476702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:29:33.656195  476702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:29:33.680525  476702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 21:29:33.680538  476702 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 21:29:33.680598  476702 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:33.680600  476702 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:33.680787  476702 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:33.680791  476702 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 21:29:33.680865  476702 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:33.680880  476702 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:33.680934  476702 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:33.680958  476702 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:33.682271  476702 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:33.682628  476702 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:33.682857  476702 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 21:29:33.682987  476702 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:33.683230  476702 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:33.683285  476702 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:33.683586  476702 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:33.683641  476702 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
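
The eight "daemon lookup" errors above are the expected cache-miss path: minikube first asks the host's Docker daemon for each image and, on a miss, falls back to the on-disk cache that is transferred below. The same probe by hand (image name from the log):

    docker image inspect registry.k8s.io/pause:3.10.1 >/dev/null 2>&1 \
      || echo "not in the local daemon; falling back to ~/.minikube/cache"
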
	I1202 21:29:34.011823  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 21:29:34.033675  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:34.035102  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:34.045333  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:34.053245  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:34.061875  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:34.094724  476702 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 21:29:34.094755  476702 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 21:29:34.094808  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.094877  476702 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 21:29:34.094890  476702 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:34.094912  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.108551  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:34.178021  476702 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 21:29:34.178060  476702 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:34.178120  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.178179  476702 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 21:29:34.178201  476702 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:34.178221  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.178278  476702 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 21:29:34.178289  476702 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:34.178307  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.178363  476702 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 21:29:34.178373  476702 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:34.178392  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.178470  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:34.178548  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:29:34.203264  476702 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 21:29:34.203310  476702 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:34.203358  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:34.230497  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:29:34.230570  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:34.230615  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:34.230675  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:34.230726  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:34.230778  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:34.230848  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:34.333376  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:34.333393  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:29:34.333443  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:34.333524  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:34.333545  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:34.333597  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:34.333605  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:29:34.435875  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 21:29:34.435970  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 21:29:34.436048  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:29:34.442990  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 21:29:34.443111  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:29:34.443210  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:29:34.443267  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:29:34.443324  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:29:34.443381  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:29:34.500232  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 21:29:34.500319  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:29:34.500387  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 21:29:34.500398  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 21:29:34.527993  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 21:29:34.528081  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:29:34.528155  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 21:29:34.528195  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:29:34.528237  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 21:29:34.528275  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:29:34.528315  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 21:29:34.528325  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 21:29:34.528394  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 21:29:34.528443  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:29:34.528482  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 21:29:34.528492  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 21:29:34.559273  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 21:29:34.559301  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 21:29:34.559357  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 21:29:34.559366  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 21:29:34.559417  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 21:29:34.559425  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 21:29:34.559460  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 21:29:34.559467  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 21:29:34.566242  476702 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 21:29:34.566314  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 21:29:34.907519  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1202 21:29:34.970028  476702 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 21:29:34.970215  476702 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:35.097799  476702 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:29:35.097861  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:29:35.148061  476702 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 21:29:35.148104  476702 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:35.148159  476702 ssh_runner.go:195] Run: which crictl
	I1202 21:29:36.638319  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.540436689s)
	I1202 21:29:36.638335  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 21:29:36.638351  476702 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:29:36.638353  476702 ssh_runner.go:235] Completed: which crictl: (1.490179946s)
	I1202 21:29:36.638402  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:29:36.638411  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:37.836171  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.197748971s)
	I1202 21:29:37.836188  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 21:29:37.836201  476702 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.19777362s)
	I1202 21:29:37.836206  476702 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:29:37.836253  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:37.836254  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:29:39.018684  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.182408739s)
	I1202 21:29:39.018701  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 21:29:39.018714  476702 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.182444382s)
	I1202 21:29:39.018782  476702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:29:39.018718  476702 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:29:39.018828  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:29:39.058069  476702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 21:29:39.058160  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:29:40.347860  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.329010952s)
	I1202 21:29:40.347878  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 21:29:40.347896  476702 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:29:40.347913  476702 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.289739657s)
	I1202 21:29:40.347943  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:29:40.347955  476702 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 21:29:40.347977  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 21:29:42.218793  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.870825145s)
	I1202 21:29:42.218812  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 21:29:42.218843  476702 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:29:42.218900  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:29:43.604137  476702 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.385217392s)
	I1202 21:29:43.604154  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 21:29:43.604170  476702 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:29:43.604219  476702 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:29:44.148992  476702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 21:29:44.149025  476702 cache_images.go:125] Successfully loaded all cached images
	I1202 21:29:44.149030  476702 cache_images.go:94] duration metric: took 10.468478658s to LoadCachedImages
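
Every image above went through the same three steps: stat the target under /var/lib/minikube/images to see whether it is already on the node, scp it from the host cache when the existence check fails, then podman load it into CRI-O's store. A hand-run equivalent for one image (a sketch; paths are from the log, and using minikube cp for the transfer is an assumption, since minikube itself copies over its own SSH session):

    IMG=pause_3.10.1
    CACHE=~/.minikube/cache/images/arm64/registry.k8s.io
    minikube ssh -- stat -c '%s %y' /var/lib/minikube/images/$IMG \
      || minikube cp $CACHE/$IMG /var/lib/minikube/images/$IMG
    minikube ssh -- sudo podman load -i /var/lib/minikube/images/$IMG
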
	I1202 21:29:44.149041  476702 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:29:44.149134  476702 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:29:44.149216  476702 ssh_runner.go:195] Run: crio config
	I1202 21:29:44.208769  476702 cni.go:84] Creating CNI manager for ""
	I1202 21:29:44.208781  476702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:29:44.208803  476702 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:29:44.208824  476702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:29:44.208942  476702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
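
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and, after a copy to kubeadm.yaml, drives the kubeadm init call at the end of this log. It can be sanity-checked offline first (a sketch; kubeadm config validate exists in recent kubeadm releases, binary path from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
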
	I1202 21:29:44.209012  476702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:29:44.216932  476702 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 21:29:44.216987  476702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:29:44.224869  476702 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 21:29:44.224897  476702 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 21:29:44.224935  476702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:29:44.224948  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 21:29:44.225003  476702 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 21:29:44.225053  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 21:29:44.232835  476702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 21:29:44.232863  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 21:29:44.244989  476702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 21:29:44.245013  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 21:29:44.245063  476702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 21:29:44.268086  476702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 21:29:44.268124  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
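
The checksum=file:... suffix on the download URLs above means each binary is verified against the matching .sha256 file published next to it on dl.k8s.io. The manual equivalent for kubelet (URLs from the log; the .sha256 file holds only the hex digest):

    V=v1.35.0-beta.0
    curl -LO "https://dl.k8s.io/release/$V/bin/linux/arm64/kubelet"
    curl -LO "https://dl.k8s.io/release/$V/bin/linux/arm64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check    # prints "kubelet: OK"
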
	I1202 21:29:45.026254  476702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:29:45.037524  476702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:29:45.073289  476702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:29:45.093192  476702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:29:45.110552  476702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:29:45.115952  476702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:29:45.129601  476702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:29:45.275569  476702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:29:45.296332  476702 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:29:45.296343  476702 certs.go:195] generating shared ca certs ...
	I1202 21:29:45.296358  476702 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:45.296573  476702 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:29:45.296633  476702 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:29:45.296640  476702 certs.go:257] generating profile certs ...
	I1202 21:29:45.296697  476702 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:29:45.296707  476702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt with IP's: []
	I1202 21:29:45.818269  476702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt ...
	I1202 21:29:45.818292  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: {Name:mkb8d0bcc2be2d0ef0d20afb9444fc3a97ac57f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:45.818496  476702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key ...
	I1202 21:29:45.818503  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key: {Name:mkeb282e60f10ff3593768e99b3b4f5d41955738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:45.818589  476702 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:29:45.818600  476702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt.afad1c23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 21:29:45.966365  476702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt.afad1c23 ...
	I1202 21:29:45.966379  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt.afad1c23: {Name:mkbc57e48e0a3f5c657092337f7e3ee3fb08e0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:45.966566  476702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23 ...
	I1202 21:29:45.966578  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23: {Name:mk908815b1369e0040397c5a9e2e04abd1a43901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:45.966662  476702 certs.go:382] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt.afad1c23 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt
	I1202 21:29:45.966736  476702 certs.go:386] copying /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23 -> /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key
	I1202 21:29:45.966787  476702 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:29:45.966797  476702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt with IP's: []
	I1202 21:29:46.260514  476702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt ...
	I1202 21:29:46.260530  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt: {Name:mkf5d9d2aaad045de61b06e902305ebcab96eb81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:46.260741  476702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key ...
	I1202 21:29:46.260749  476702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key: {Name:mk813ba167fc250a583d4baaf54c036fdc9c9932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:29:46.260935  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:29:46.260975  476702 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:29:46.260985  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:29:46.261024  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:29:46.261050  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:29:46.261075  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:29:46.261123  476702 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:29:46.262078  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:29:46.283938  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:29:46.302402  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:29:46.321236  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:29:46.339101  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:29:46.357777  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:29:46.376014  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:29:46.394170  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:29:46.411556  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:29:46.429422  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:29:46.447649  476702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:29:46.465082  476702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:29:46.479390  476702 ssh_runner.go:195] Run: openssl version
	I1202 21:29:46.485905  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:29:46.495277  476702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:29:46.499129  476702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:29:46.499187  476702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:29:46.540358  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:29:46.549018  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:29:46.557432  476702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:29:46.561860  476702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:29:46.561935  476702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:29:46.607379  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:29:46.617016  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:29:46.626230  476702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:29:46.630317  476702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:29:46.630376  476702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:29:46.671774  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
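
The <hash>.0 symlink names above are OpenSSL subject-hash values, which is how the system trust store locates a CA at verification time. The log's pairing of minikubeCA.pem with b5213941.0 can be reproduced directly:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
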
	I1202 21:29:46.680551  476702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:29:46.684438  476702 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 21:29:46.684482  476702 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:46.684556  476702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:29:46.684612  476702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:29:46.711562  476702 cri.go:89] found id: ""
	I1202 21:29:46.711628  476702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:29:46.719774  476702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:29:46.727682  476702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:29:46.727737  476702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:29:46.735534  476702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:29:46.735544  476702 kubeadm.go:158] found existing configuration files:
	
	I1202 21:29:46.735597  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:29:46.743276  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:29:46.743350  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:29:46.750656  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:29:46.758076  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:29:46.758134  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:29:46.765578  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:29:46.773346  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:29:46.773402  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:29:46.780641  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:29:46.788164  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:29:46.788231  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
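
The four grep/rm pairs above are one stale-config check repeated: keep /etc/kubernetes/<name>.conf only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. Collapsed into a loop (endpoint and paths from the log):

    EP=https://control-plane.minikube.internal:8441
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
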
	I1202 21:29:46.795855  476702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:29:46.835958  476702 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:29:46.836009  476702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:29:46.926161  476702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:29:46.926225  476702 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:29:46.926268  476702 kubeadm.go:319] OS: Linux
	I1202 21:29:46.926314  476702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:29:46.926370  476702 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:29:46.926418  476702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:29:46.926465  476702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:29:46.926518  476702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:29:46.926575  476702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:29:46.926625  476702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:29:46.926676  476702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:29:46.926722  476702 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:29:46.989119  476702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:29:46.989260  476702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:29:46.989378  476702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:29:47.023402  476702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:29:47.031218  476702 out.go:252]   - Generating certificates and keys ...
	I1202 21:29:47.031310  476702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:29:47.031375  476702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:29:47.436001  476702 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 21:29:47.498600  476702 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 21:29:47.744530  476702 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 21:29:47.958923  476702 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 21:29:48.220011  476702 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 21:29:48.220281  476702 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:29:48.507740  476702 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 21:29:48.508033  476702 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:29:48.725423  476702 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 21:29:48.905687  476702 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 21:29:49.099947  476702 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 21:29:49.100153  476702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:29:49.292187  476702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:29:49.461446  476702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:29:49.563584  476702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:29:49.856835  476702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:29:50.246492  476702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:29:50.247576  476702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:29:50.252902  476702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:29:50.272719  476702 out.go:252]   - Booting up control plane ...
	I1202 21:29:50.272821  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:29:50.272898  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:29:50.272963  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:29:50.286861  476702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:29:50.286964  476702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:29:50.294486  476702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:29:50.294755  476702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:29:50.294798  476702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:29:50.431214  476702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:29:50.431326  476702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:33:50.431349  476702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000203058s
	I1202 21:33:50.431367  476702 kubeadm.go:319] 
	I1202 21:33:50.431423  476702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:33:50.431456  476702 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:33:50.431563  476702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:33:50.431566  476702 kubeadm.go:319] 
	I1202 21:33:50.431677  476702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:33:50.431708  476702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:33:50.431738  476702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:33:50.431741  476702 kubeadm.go:319] 
	I1202 21:33:50.435466  476702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:33:50.435921  476702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:33:50.436036  476702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:33:50.436344  476702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:33:50.436358  476702 kubeadm.go:319] 
	I1202 21:33:50.436426  476702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:33:50.436557  476702 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-066896 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000203058s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
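
The failed wait-control-plane phase above is simply polling the kubelet's local healthz endpoint until it answers, so the same probe can be reproduced by hand when triaging. A minimal sketch in shell, assuming a shell on the minikube node (for example via `minikube ssh -p functional-066896`):

	# Probe the endpoint kubeadm's kubelet-check polls (per the log above).
	curl -sS http://127.0.0.1:10248/healthz; echo
	# Connection refused means the kubelet process is not up at all; the
	# kubeadm output already names the next two commands to run:
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50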
	
	I1202 21:33:50.436653  476702 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:33:50.844188  476702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:33:50.857663  476702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:33:50.857719  476702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:33:50.865734  476702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:33:50.865744  476702 kubeadm.go:158] found existing configuration files:
	
	I1202 21:33:50.865800  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:33:50.873683  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:33:50.873740  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:33:50.881234  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:33:50.889150  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:33:50.889208  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:33:50.896959  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:33:50.904657  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:33:50.904712  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:33:50.912407  476702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:33:50.920109  476702 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:33:50.920174  476702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
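	The grep-then-rm sequence above is minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and a non-zero grep exit (here because the files do not exist at all) triggers removal before kubeadm init is retried. A shell paraphrase of that sequence, purely illustrative (the real implementation is the Go code in kubeadm.go quoted above):

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # A missing file and a wrong endpoint both fail the grep, so the file goes.
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done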
	I1202 21:33:50.927928  476702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:33:50.965288  476702 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:33:50.965367  476702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:33:51.032335  476702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:33:51.032403  476702 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:33:51.032438  476702 kubeadm.go:319] OS: Linux
	I1202 21:33:51.032481  476702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:33:51.032528  476702 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:33:51.032574  476702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:33:51.032621  476702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:33:51.032667  476702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:33:51.032721  476702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:33:51.032765  476702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:33:51.032812  476702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:33:51.032856  476702 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:33:51.096125  476702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:33:51.096251  476702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:33:51.096379  476702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:33:51.109055  476702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:33:51.110866  476702 out.go:252]   - Generating certificates and keys ...
	I1202 21:33:51.110962  476702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:33:51.111045  476702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:33:51.111133  476702 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:33:51.111204  476702 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:33:51.111271  476702 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:33:51.111330  476702 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:33:51.111398  476702 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:33:51.111565  476702 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:33:51.111648  476702 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:33:51.111802  476702 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:33:51.112085  476702 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:33:51.112143  476702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:33:51.234675  476702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:33:51.473701  476702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:33:51.527014  476702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:33:51.752145  476702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:33:51.971667  476702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:33:51.972268  476702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:33:51.974986  476702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:33:51.976575  476702 out.go:252]   - Booting up control plane ...
	I1202 21:33:51.976670  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:33:51.976970  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:33:51.978573  476702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:33:51.993631  476702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:33:51.993806  476702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:33:52.004210  476702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:33:52.004500  476702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:33:52.004695  476702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:33:52.139544  476702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:33:52.139653  476702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:37:52.137779  476702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000050047s
	I1202 21:37:52.137802  476702 kubeadm.go:319] 
	I1202 21:37:52.137894  476702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:37:52.137946  476702 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:37:52.138055  476702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:37:52.138059  476702 kubeadm.go:319] 
	I1202 21:37:52.138162  476702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:37:52.138193  476702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:37:52.138222  476702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:37:52.138225  476702 kubeadm.go:319] 
	I1202 21:37:52.142309  476702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:37:52.142775  476702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:37:52.142884  476702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:37:52.143193  476702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:37:52.143207  476702 kubeadm.go:319] 
	I1202 21:37:52.143284  476702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:37:52.143348  476702 kubeadm.go:403] duration metric: took 8m5.458869189s to StartCluster
	I1202 21:37:52.143379  476702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:37:52.143443  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:37:52.174800  476702 cri.go:89] found id: ""
	I1202 21:37:52.174814  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.174820  476702 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:37:52.174826  476702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:37:52.174883  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:37:52.200717  476702 cri.go:89] found id: ""
	I1202 21:37:52.200730  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.200736  476702 logs.go:284] No container was found matching "etcd"
	I1202 21:37:52.200743  476702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:37:52.200805  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:37:52.226241  476702 cri.go:89] found id: ""
	I1202 21:37:52.226255  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.226262  476702 logs.go:284] No container was found matching "coredns"
	I1202 21:37:52.226268  476702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:37:52.226328  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:37:52.253607  476702 cri.go:89] found id: ""
	I1202 21:37:52.253620  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.253627  476702 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:37:52.253633  476702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:37:52.253691  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:37:52.278839  476702 cri.go:89] found id: ""
	I1202 21:37:52.278854  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.278860  476702 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:37:52.278866  476702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:37:52.278922  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:37:52.303169  476702 cri.go:89] found id: ""
	I1202 21:37:52.303183  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.303190  476702 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:37:52.303195  476702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:37:52.303255  476702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:37:52.328416  476702 cri.go:89] found id: ""
	I1202 21:37:52.328430  476702 logs.go:282] 0 containers: []
	W1202 21:37:52.328437  476702 logs.go:284] No container was found matching "kindnet"
	I1202 21:37:52.328445  476702 logs.go:123] Gathering logs for container status ...
	I1202 21:37:52.328455  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:37:52.360972  476702 logs.go:123] Gathering logs for kubelet ...
	I1202 21:37:52.360988  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:37:52.426653  476702 logs.go:123] Gathering logs for dmesg ...
	I1202 21:37:52.426674  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:37:52.442418  476702 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:37:52.442433  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:37:52.515578  476702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:37:52.502255    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.508371    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.509338    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.510051    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.511612    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:37:52.502255    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.508371    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.509338    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.510051    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:52.511612    5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:37:52.515597  476702 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:37:52.515607  476702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1202 21:37:52.558031  476702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:37:52.558090  476702 out.go:285] * 
	W1202 21:37:52.558211  476702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:37:52.558267  476702 out.go:285] * 
	W1202 21:37:52.560433  476702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:37:52.564326  476702 out.go:203] 
	W1202 21:37:52.565774  476702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:37:52.565823  476702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:37:52.565846  476702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:37:52.567491  476702 out.go:203] 
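	The suggestion above is minikube's canned advice for K8S_KUBELET_NOT_RUNNING; spelled out against this profile it would be something like the following (a sketch only; given the kubelet log further down, the cgroup-driver flag alone may not help on a cgroup v1 host with kubelet v1.35):

	out/minikube-linux-arm64 start -p functional-066896 \
	  --extra-config=kubelet.cgroup-driver=systemd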
	
	
	==> CRI-O <==
	Dec 02 21:29:34 functional-066896 crio[842]: time="2025-12-02T21:29:34.523293606Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=cfd07ac1-9b25-4fc7-a425-8c9c3368cae1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:34 functional-066896 crio[842]: time="2025-12-02T21:29:34.523346152Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=cfd07ac1-9b25-4fc7-a425-8c9c3368cae1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:36 functional-066896 crio[842]: time="2025-12-02T21:29:36.667440555Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7e038c4a-151f-419a-a362-0c349f0988b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:36 functional-066896 crio[842]: time="2025-12-02T21:29:36.668035335Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=7e038c4a-151f-419a-a362-0c349f0988b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:36 functional-066896 crio[842]: time="2025-12-02T21:29:36.668100427Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=7e038c4a-151f-419a-a362-0c349f0988b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:37 functional-066896 crio[842]: time="2025-12-02T21:29:37.865412955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6f4934a4-ab9d-4859-bef6-db5d4ce5963d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:37 functional-066896 crio[842]: time="2025-12-02T21:29:37.865725934Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=6f4934a4-ab9d-4859-bef6-db5d4ce5963d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:37 functional-066896 crio[842]: time="2025-12-02T21:29:37.865769726Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=6f4934a4-ab9d-4859-bef6-db5d4ce5963d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:39 functional-066896 crio[842]: time="2025-12-02T21:29:39.053192201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b5726fdf-17b4-4d72-8550-b3a11eb6539a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:39 functional-066896 crio[842]: time="2025-12-02T21:29:39.053501554Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=b5726fdf-17b4-4d72-8550-b3a11eb6539a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:39 functional-066896 crio[842]: time="2025-12-02T21:29:39.053578601Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=b5726fdf-17b4-4d72-8550-b3a11eb6539a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:46 functional-066896 crio[842]: time="2025-12-02T21:29:46.992744854Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d2886fed-a628-4243-bf2e-8cd0e52c617d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:46.999164923Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f6d786cc-ab5e-4c4e-a0c7-c713250b8773 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:47.007181119Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=958f83fa-8838-4187-b63c-60d954f83a30 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:47.01333652Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fd38cb3e-f8f6-4fc5-9129-bd9dcc6cfc58 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:47.014495268Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=329253f7-304c-458a-9cd6-6c694faaac5b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:47.015994557Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f194357d-fba5-49a3-80d0-9f0fffc4db93 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:29:47 functional-066896 crio[842]: time="2025-12-02T21:29:47.016835805Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=55b93e80-a5d1-4ea3-95bc-6bf55dc24228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.099520616Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a9196544-ca87-417e-8718-f01ef03f5ab7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.101131184Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cadcb5ff-03f5-4654-aa42-2b1e8bc490e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.102658067Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=73b4a941-59eb-4157-95ab-2ce8cf14790c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.104154582Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cb8a3342-47f1-4932-8cce-92ab4d39612f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.105010139Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=47f8b65a-f6f9-4e62-994d-dba8db7fc1db name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.106322284Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=be250496-28ab-48fe-832d-6ec2eb731a69 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:33:51 functional-066896 crio[842]: time="2025-12-02T21:33:51.107134329Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fd100e07-396f-44b9-a499-99e4d4469253 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:37:53.529094    5614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:53.529535    5614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:53.531184    5614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:53.531716    5614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:37:53.533394    5614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
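
	kubectl is being refused because nothing is serving the apiserver port, which is consistent with the empty container status table above: the kubelet never came up, so no static pods were started. A quick check, assuming a shell on the node:

	# Is anything listening on the port kubectl dials?
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	# Any control-plane containers at all? (matches the empty table above)
	sudo crictl ps -a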
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:37:53 up  3:20,  0 user,  load average: 0.22, 0.28, 0.65
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:37:50 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:37:51 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 02 21:37:51 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:51 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:51 functional-066896 kubelet[5424]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:51 functional-066896 kubelet[5424]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:51 functional-066896 kubelet[5424]: E1202 21:37:51.459637    5424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:37:51 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:37:51 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 02 21:37:52 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:52 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:52 functional-066896 kubelet[5435]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:52 functional-066896 kubelet[5435]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:52 functional-066896 kubelet[5435]: E1202 21:37:52.227389    5435 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 02 21:37:52 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:52 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:37:52 functional-066896 kubelet[5527]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:52 functional-066896 kubelet[5527]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:37:52 functional-066896 kubelet[5527]: E1202 21:37:52.919963    5527 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:37:52 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
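The restart loop above (restart counter 645-647) is a hard validation failure, not flakiness: kubelet v1.35.0-beta.0 refuses to start on a host that still runs the legacy cgroup v1 hierarchy. A quick way to confirm which hierarchy a node uses (a diagnostic sketch, not part of the test suite):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1,
	# which kubelet >= v1.35 rejects with the exact error logged above.
	stat -fc %T /sys/fs/cgroup/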
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 6 (336.200525ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1202 21:37:53.983880  483034 status.go:458] kubeconfig endpoint: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (508.51s)
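The status probe fails because the "functional-066896" entry is missing from the shared kubeconfig, as the stderr above shows. minikube's own warning names the fix; a minimal repair sequence when reproducing locally (a sketch, assuming the profile still exists on disk):

	out/minikube-linux-arm64 update-context -p functional-066896   # rewrite the kubeconfig endpoint for the profile
	kubectl config current-context                                 # expected to print functional-066896 afterwards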
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 21:37:54.001930  447211 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --alsologtostderr -v=8
E1202 21:38:42.594439  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:39:10.303037  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:41:18.469293  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:43:42.594631  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-066896 --alsologtostderr -v=8: exit status 80 (6m6.230141201s)
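The non-zero exit aborts the SoftStart run after six minutes. When reproducing a failed `minikube start` locally, the full node logs can be captured for offline inspection (a sketch, assuming the `--file` option of `minikube logs`):

	out/minikube-linux-arm64 logs -p functional-066896 --file=./functional-066896.log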
-- stdout --
	* [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	
-- /stdout --
** stderr ** 
	I1202 21:37:54.052280  483106 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:37:54.052518  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052549  483106 out.go:374] Setting ErrFile to fd 2...
	I1202 21:37:54.052570  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052830  483106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:37:54.053229  483106 out.go:368] Setting JSON to false
	I1202 21:37:54.054096  483106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12002,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:37:54.054239  483106 start.go:143] virtualization:  
	I1202 21:37:54.055968  483106 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:37:54.057216  483106 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:37:54.057305  483106 notify.go:221] Checking for updates...
	I1202 21:37:54.059409  483106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:37:54.060390  483106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:54.061474  483106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:37:54.062609  483106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:37:54.063772  483106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:37:54.065317  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:54.065458  483106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:37:54.087852  483106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:37:54.087968  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.157300  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.14827719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.157407  483106 docker.go:319] overlay module found
	I1202 21:37:54.158855  483106 out.go:179] * Using the docker driver based on existing profile
	I1202 21:37:54.160356  483106 start.go:309] selected driver: docker
	I1202 21:37:54.160374  483106 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.160477  483106 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:37:54.160570  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.221500  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.212376823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.221914  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:54.221982  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:54.222036  483106 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.223816  483106 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:37:54.224907  483106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:37:54.226134  483106 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:37:54.227415  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:54.227490  483106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:37:54.247414  483106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:37:54.247439  483106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:37:54.295322  483106 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:37:54.500334  483106 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:37:54.500536  483106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:37:54.500574  483106 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500673  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:37:54.500684  483106 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.936µs
	I1202 21:37:54.500698  483106 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:37:54.500710  483106 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500741  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:37:54.500746  483106 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.194µs
	I1202 21:37:54.500752  483106 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500761  483106 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500788  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:37:54.500788  483106 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:37:54.500792  483106 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.492µs
	I1202 21:37:54.500799  483106 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500809  483106 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500816  483106 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500852  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:37:54.500856  483106 start.go:364] duration metric: took 26.462µs to acquireMachinesLock for "functional-066896"
	I1202 21:37:54.500858  483106 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.838µs
	I1202 21:37:54.500864  483106 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500869  483106 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:37:54.500875  483106 fix.go:54] fixHost starting: 
	I1202 21:37:54.500873  483106 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500901  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:37:54.500905  483106 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.15µs
	I1202 21:37:54.500919  483106 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500928  483106 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500951  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:37:54.500956  483106 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.833µs
	I1202 21:37:54.500961  483106 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:37:54.500970  483106 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500994  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:37:54.500998  483106 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.391µs
	I1202 21:37:54.501003  483106 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:37:54.501011  483106 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.501036  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:37:54.501040  483106 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.097µs
	I1202 21:37:54.501046  483106 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:37:54.501065  483106 cache.go:87] Successfully saved all images to host disk.
	I1202 21:37:54.501197  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:54.517471  483106 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:37:54.517510  483106 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:37:54.519079  483106 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:37:54.519117  483106 machine.go:94] provisionDockerMachine start ...
	I1202 21:37:54.519205  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.536086  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.536422  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.536437  483106 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:37:54.686523  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.686547  483106 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:37:54.686612  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.710674  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.710988  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.711037  483106 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:37:54.868253  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.868331  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.886749  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.887092  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.887115  483106 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:37:55.036431  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
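	# (illustrative aside) The grep/sed snippet above follows the Debian convention of
	# mapping the machine's own hostname to 127.0.1.1, so the node resolves its name
	# without DNS. Expected end state inside the container:
	#   $ grep '^127.0.1.1' /etc/hosts
	#   127.0.1.1 functional-066896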
	I1202 21:37:55.036522  483106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:37:55.036593  483106 ubuntu.go:190] setting up certificates
	I1202 21:37:55.036621  483106 provision.go:84] configureAuth start
	I1202 21:37:55.036718  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:55.055483  483106 provision.go:143] copyHostCerts
	I1202 21:37:55.055534  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055575  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:37:55.055589  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055670  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:37:55.055775  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055797  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:37:55.055803  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055836  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:37:55.055880  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055901  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:37:55.055908  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055941  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:37:55.055998  483106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:37:55.445716  483106 provision.go:177] copyRemoteCerts
	I1202 21:37:55.445788  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:37:55.445829  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.462295  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:55.566646  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:37:55.566707  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:37:55.584230  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:37:55.584339  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:37:55.601138  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:37:55.601197  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:37:55.619092  483106 provision.go:87] duration metric: took 582.43702ms to configureAuth
	I1202 21:37:55.619117  483106 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:37:55.619308  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:55.619413  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.637231  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:55.637559  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:55.637573  483106 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:37:55.956144  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:37:55.956170  483106 machine.go:97] duration metric: took 1.437044454s to provisionDockerMachine
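	# (illustrative aside) The drop-in written above tells CRI-O to treat the whole
	# service CIDR as an insecure registry range, so in-cluster registries can be
	# pulled from without TLS. To double-check the generated file:
	#   $ cat /etc/sysconfig/crio.minikube
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '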
	I1202 21:37:55.956204  483106 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:37:55.956218  483106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:37:55.956294  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:37:55.956339  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.980756  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.091648  483106 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:37:56.095210  483106 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:37:56.095237  483106 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:37:56.095243  483106 command_runner.go:130] > VERSION_ID="12"
	I1202 21:37:56.095248  483106 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:37:56.095253  483106 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:37:56.095256  483106 command_runner.go:130] > ID=debian
	I1202 21:37:56.095270  483106 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:37:56.095275  483106 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:37:56.095281  483106 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:37:56.095363  483106 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:37:56.095385  483106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:37:56.095402  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:37:56.095457  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:37:56.095544  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:37:56.095557  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /etc/ssl/certs/4472112.pem
	I1202 21:37:56.095638  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:37:56.095647  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> /etc/test/nested/copy/447211/hosts
	I1202 21:37:56.095696  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:37:56.103392  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:56.120789  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:37:56.138613  483106 start.go:296] duration metric: took 182.392463ms for postStartSetup
	I1202 21:37:56.138692  483106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:37:56.138730  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.156335  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.255560  483106 command_runner.go:130] > 13%
	I1202 21:37:56.256083  483106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:37:56.260264  483106 command_runner.go:130] > 169G
	I1202 21:37:56.260703  483106 fix.go:56] duration metric: took 1.759824513s for fixHost
	I1202 21:37:56.260720  483106 start.go:83] releasing machines lock for "functional-066896", held for 1.759856579s
	I1202 21:37:56.260787  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:56.278034  483106 ssh_runner.go:195] Run: cat /version.json
	I1202 21:37:56.278057  483106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:37:56.278086  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.278126  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.294975  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.296343  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.394339  483106 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:37:56.394533  483106 ssh_runner.go:195] Run: systemctl --version
	I1202 21:37:56.493105  483106 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:37:56.493163  483106 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:37:56.493186  483106 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:37:56.493258  483106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:37:56.530464  483106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:37:56.534763  483106 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:37:56.534813  483106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:37:56.534914  483106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:37:56.542668  483106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:37:56.542693  483106 start.go:496] detecting cgroup driver to use...
	I1202 21:37:56.542754  483106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:37:56.542818  483106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:37:56.557769  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:37:56.570749  483106 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:37:56.570845  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:37:56.586179  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:37:56.599149  483106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:37:56.708191  483106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:37:56.842013  483106 docker.go:234] disabling docker service ...
	I1202 21:37:56.842082  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:37:56.857073  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:37:56.870370  483106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:37:56.987213  483106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:37:57.106635  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:37:57.119596  483106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:37:57.132314  483106 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 21:37:57.133557  483106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:37:57.133663  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.142404  483106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:37:57.142548  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.151265  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.160043  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.168450  483106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:37:57.177232  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.186240  483106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.194528  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
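	# (illustrative aside) Net effect of the sed edits above on
	# /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands for
	# illustration; the real file carries other keys as well:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]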
	I1202 21:37:57.203498  483106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:37:57.209931  483106 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:37:57.210879  483106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
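	# (illustrative aside) Two kernel networking prerequisites are confirmed here:
	# bridged pod traffic must pass through iptables, and IPv4 forwarding must be on.
	# Equivalent manual checks:
	#   $ sysctl net.bridge.bridge-nf-call-iptables   # expect: ... = 1
	#   $ cat /proc/sys/net/ipv4/ip_forward           # expect: 1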
	I1202 21:37:57.218360  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.328965  483106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:37:57.485223  483106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:37:57.485296  483106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:37:57.489286  483106 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 21:37:57.489311  483106 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:37:57.489318  483106 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 21:37:57.489325  483106 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:57.489330  483106 command_runner.go:130] > Access: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489343  483106 command_runner.go:130] > Modify: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489348  483106 command_runner.go:130] > Change: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489352  483106 command_runner.go:130] >  Birth: -
	I1202 21:37:57.489576  483106 start.go:564] Will wait 60s for crictl version
	I1202 21:37:57.489633  483106 ssh_runner.go:195] Run: which crictl
	I1202 21:37:57.495444  483106 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:37:57.495541  483106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:37:57.522065  483106 command_runner.go:130] > Version:  0.1.0
	I1202 21:37:57.522330  483106 command_runner.go:130] > RuntimeName:  cri-o
	I1202 21:37:57.522612  483106 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 21:37:57.522814  483106 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:37:57.525085  483106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
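	# (illustrative aside) crictl picks up the CRI-O socket from the /etc/crictl.yaml
	# written a few steps earlier, so the same runtime can be inspected by hand, e.g.:
	#   $ sudo crictl version   # prints the RuntimeName/RuntimeVersion shown above
	#   $ sudo crictl images    # human-readable form of the JSON dump further below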
	I1202 21:37:57.525167  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.560503  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.560529  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.560537  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.560542  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.560547  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.560551  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.560555  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.560560  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.560564  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.560568  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.560572  483106 command_runner.go:130] >      static
	I1202 21:37:57.560580  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.560584  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.560589  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.560595  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.560598  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.560603  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.560612  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.560616  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.560620  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.563007  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.589712  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.589787  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.589809  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.589825  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.589855  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.589880  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.589897  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.589914  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.589955  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.589975  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.589991  483106 command_runner.go:130] >      static
	I1202 21:37:57.590007  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.590023  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.590049  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.590069  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.590086  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.590103  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.590120  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.590146  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.590164  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.593809  483106 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:37:57.595025  483106 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:37:57.611773  483106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:37:57.615442  483106 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:37:57.615683  483106 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:37:57.615790  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:57.615841  483106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:37:57.645971  483106 command_runner.go:130] > {
	I1202 21:37:57.645994  483106 command_runner.go:130] >   "images":  [
	I1202 21:37:57.645998  483106 command_runner.go:130] >     {
	I1202 21:37:57.646007  483106 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:37:57.646011  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646017  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:37:57.646020  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646024  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646033  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 21:37:57.646036  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646041  483106 command_runner.go:130] >       "size":  "29035622",
	I1202 21:37:57.646045  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646049  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646052  483106 command_runner.go:130] >     },
	I1202 21:37:57.646054  483106 command_runner.go:130] >     {
	I1202 21:37:57.646060  483106 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:37:57.646068  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646074  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:37:57.646077  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646080  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646088  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 21:37:57.646096  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646101  483106 command_runner.go:130] >       "size":  "74488375",
	I1202 21:37:57.646105  483106 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:37:57.646109  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646112  483106 command_runner.go:130] >     },
	I1202 21:37:57.646115  483106 command_runner.go:130] >     {
	I1202 21:37:57.646121  483106 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:37:57.646124  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646129  483106 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:37:57.646132  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646136  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646147  483106 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 21:37:57.646150  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646157  483106 command_runner.go:130] >       "size":  "60854229",
	I1202 21:37:57.646161  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646165  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646168  483106 command_runner.go:130] >       },
	I1202 21:37:57.646172  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646175  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646178  483106 command_runner.go:130] >     },
	I1202 21:37:57.646181  483106 command_runner.go:130] >     {
	I1202 21:37:57.646187  483106 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:37:57.646191  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646196  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:37:57.646200  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646203  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646211  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 21:37:57.646216  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646220  483106 command_runner.go:130] >       "size":  "84947242",
	I1202 21:37:57.646223  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646227  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646230  483106 command_runner.go:130] >       },
	I1202 21:37:57.646234  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646238  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646241  483106 command_runner.go:130] >     },
	I1202 21:37:57.646243  483106 command_runner.go:130] >     {
	I1202 21:37:57.646250  483106 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:37:57.646253  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646259  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:37:57.646262  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646266  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646274  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 21:37:57.646277  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646285  483106 command_runner.go:130] >       "size":  "72167568",
	I1202 21:37:57.646289  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646292  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646299  483106 command_runner.go:130] >       },
	I1202 21:37:57.646305  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646309  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646313  483106 command_runner.go:130] >     },
	I1202 21:37:57.646316  483106 command_runner.go:130] >     {
	I1202 21:37:57.646322  483106 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:37:57.646326  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646331  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:37:57.646334  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646338  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646345  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 21:37:57.646348  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646352  483106 command_runner.go:130] >       "size":  "74105124",
	I1202 21:37:57.646356  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646360  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646363  483106 command_runner.go:130] >     },
	I1202 21:37:57.646365  483106 command_runner.go:130] >     {
	I1202 21:37:57.646372  483106 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:37:57.646375  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646381  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:37:57.646384  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646387  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646399  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 21:37:57.646403  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646406  483106 command_runner.go:130] >       "size":  "49819792",
	I1202 21:37:57.646409  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646413  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646416  483106 command_runner.go:130] >       },
	I1202 21:37:57.646421  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646424  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646427  483106 command_runner.go:130] >     },
	I1202 21:37:57.646430  483106 command_runner.go:130] >     {
	I1202 21:37:57.646436  483106 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:37:57.646443  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646447  483106 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.646450  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646454  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646461  483106 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 21:37:57.646464  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646468  483106 command_runner.go:130] >       "size":  "517328",
	I1202 21:37:57.646471  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646474  483106 command_runner.go:130] >         "value":  "65535"
	I1202 21:37:57.646477  483106 command_runner.go:130] >       },
	I1202 21:37:57.646481  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646485  483106 command_runner.go:130] >       "pinned":  true
	I1202 21:37:57.646488  483106 command_runner.go:130] >     }
	I1202 21:37:57.646491  483106 command_runner.go:130] >   ]
	I1202 21:37:57.646493  483106 command_runner.go:130] > }
	I1202 21:37:57.648114  483106 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:37:57.648141  483106 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:37:57.648149  483106 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:37:57.648254  483106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:37:57.648333  483106 ssh_runner.go:195] Run: crio config
	I1202 21:37:57.700265  483106 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 21:37:57.700298  483106 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 21:37:57.700306  483106 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 21:37:57.700310  483106 command_runner.go:130] > #
	I1202 21:37:57.700318  483106 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 21:37:57.700324  483106 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 21:37:57.700331  483106 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 21:37:57.700339  483106 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 21:37:57.700343  483106 command_runner.go:130] > # reload'.
	I1202 21:37:57.700350  483106 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 21:37:57.700357  483106 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 21:37:57.700363  483106 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 21:37:57.700373  483106 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 21:37:57.700376  483106 command_runner.go:130] > [crio]
	I1202 21:37:57.700387  483106 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 21:37:57.700395  483106 command_runner.go:130] > # container images, in this directory.
	I1202 21:37:57.700407  483106 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 21:37:57.700421  483106 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 21:37:57.700427  483106 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 21:37:57.700434  483106 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1202 21:37:57.700447  483106 command_runner.go:130] > # imagestore = ""
	I1202 21:37:57.700456  483106 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 21:37:57.700462  483106 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 21:37:57.700469  483106 command_runner.go:130] > # storage_driver = "overlay"
	I1202 21:37:57.700475  483106 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1202 21:37:57.700484  483106 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 21:37:57.700488  483106 command_runner.go:130] > # storage_option = [
	I1202 21:37:57.700493  483106 command_runner.go:130] > # ]
	I1202 21:37:57.700499  483106 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 21:37:57.700508  483106 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 21:37:57.700513  483106 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 21:37:57.700520  483106 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 21:37:57.700528  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 21:37:57.700532  483106 command_runner.go:130] > # always happen on a node reboot
	I1202 21:37:57.700541  483106 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 21:37:57.700555  483106 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 21:37:57.700563  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 21:37:57.700568  483106 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 21:37:57.700573  483106 command_runner.go:130] > # version_file_persist = ""
	I1202 21:37:57.700587  483106 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 21:37:57.700595  483106 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 21:37:57.700603  483106 command_runner.go:130] > # internal_wipe = true
	I1202 21:37:57.700612  483106 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 21:37:57.700617  483106 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 21:37:57.700629  483106 command_runner.go:130] > # internal_repair = true
	I1202 21:37:57.700634  483106 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 21:37:57.700640  483106 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 21:37:57.700650  483106 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 21:37:57.700656  483106 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 21:37:57.700661  483106 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 21:37:57.700667  483106 command_runner.go:130] > [crio.api]
	I1202 21:37:57.700672  483106 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 21:37:57.700677  483106 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 21:37:57.700685  483106 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 21:37:57.700690  483106 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 21:37:57.700699  483106 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 21:37:57.700710  483106 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 21:37:57.700714  483106 command_runner.go:130] > # stream_port = "0"
	I1202 21:37:57.700720  483106 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 21:37:57.700725  483106 command_runner.go:130] > # stream_enable_tls = false
	I1202 21:37:57.700731  483106 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 21:37:57.700954  483106 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 21:37:57.700969  483106 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 21:37:57.700976  483106 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 21:37:57.700981  483106 command_runner.go:130] > # stream_tls_cert = ""
	I1202 21:37:57.700988  483106 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 21:37:57.700994  483106 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 21:37:57.701175  483106 command_runner.go:130] > # stream_tls_key = ""
	I1202 21:37:57.701188  483106 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 21:37:57.701195  483106 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 21:37:57.701200  483106 command_runner.go:130] > # automatically pick up the changes.
	I1202 21:37:57.701204  483106 command_runner.go:130] > # stream_tls_ca = ""
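
For reference, enabling the encrypted stream server combines the four options above; a minimal sketch with hypothetical certificate paths (not taken from this run):

	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"   # hypothetical path
	stream_tls_key = "/etc/crio/stream.key"    # hypothetical path
	stream_tls_ca = "/etc/crio/stream-ca.crt"  # hypothetical path

As the comments note, CRI-O picks up changes to these files automatically.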
	I1202 21:37:57.701226  483106 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701255  483106 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 21:37:57.701272  483106 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701278  483106 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 21:37:57.701285  483106 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 21:37:57.701296  483106 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 21:37:57.701300  483106 command_runner.go:130] > [crio.runtime]
	I1202 21:37:57.701306  483106 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 21:37:57.701315  483106 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 21:37:57.701318  483106 command_runner.go:130] > # "nofile=1024:2048"
	I1202 21:37:57.701324  483106 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 21:37:57.701328  483106 command_runner.go:130] > # default_ulimits = [
	I1202 21:37:57.701331  483106 command_runner.go:130] > # ]
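
A populated version of this stanza follows the "<ulimit name>=<soft limit>:<hard limit>" form described above; the values here are illustrative only:

	default_ulimits = [
		"nofile=1024:2048",  # soft limit 1024, hard limit 2048 (illustrative)
	]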
	I1202 21:37:57.701338  483106 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 21:37:57.701348  483106 command_runner.go:130] > # no_pivot = false
	I1202 21:37:57.701354  483106 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 21:37:57.701360  483106 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 21:37:57.701368  483106 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 21:37:57.701374  483106 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 21:37:57.701385  483106 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 21:37:57.701395  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701399  483106 command_runner.go:130] > # conmon = ""
	I1202 21:37:57.701403  483106 command_runner.go:130] > # Cgroup setting for conmon
	I1202 21:37:57.701410  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 21:37:57.701414  483106 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 21:37:57.701420  483106 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 21:37:57.701425  483106 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 21:37:57.701432  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701438  483106 command_runner.go:130] > # conmon_env = [
	I1202 21:37:57.701441  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701447  483106 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 21:37:57.701459  483106 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 21:37:57.701465  483106 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 21:37:57.701470  483106 command_runner.go:130] > # default_env = [
	I1202 21:37:57.701475  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701481  483106 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 21:37:57.701491  483106 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1202 21:37:57.701495  483106 command_runner.go:130] > # selinux = false
	I1202 21:37:57.701501  483106 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 21:37:57.701509  483106 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 21:37:57.701516  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701526  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.701533  483106 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 21:37:57.701541  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701545  483106 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 21:37:57.701551  483106 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 21:37:57.701559  483106 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 21:37:57.701566  483106 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 21:37:57.701575  483106 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 21:37:57.701580  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701584  483106 command_runner.go:130] > # apparmor_profile = "crio-default"
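
As the comment notes, setting the profile name to "unconfined" amounts to disabling AppArmor; a one-line sketch:

	apparmor_profile = "unconfined"  # disables AppArmor for containers without a pod annotation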
	I1202 21:37:57.701590  483106 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 21:37:57.701595  483106 command_runner.go:130] > # the cgroup blockio controller.
	I1202 21:37:57.701601  483106 command_runner.go:130] > # blockio_config_file = ""
	I1202 21:37:57.701608  483106 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 21:37:57.701614  483106 command_runner.go:130] > # blockio parameters.
	I1202 21:37:57.701618  483106 command_runner.go:130] > # blockio_reload = false
	I1202 21:37:57.701625  483106 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 21:37:57.701628  483106 command_runner.go:130] > # irqbalance daemon.
	I1202 21:37:57.701634  483106 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 21:37:57.701642  483106 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 21:37:57.701649  483106 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 21:37:57.701659  483106 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 21:37:57.701689  483106 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 21:37:57.701703  483106 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 21:37:57.701707  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701711  483106 command_runner.go:130] > # rdt_config_file = ""
	I1202 21:37:57.701717  483106 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 21:37:57.701723  483106 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 21:37:57.701730  483106 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 21:37:57.701736  483106 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 21:37:57.701742  483106 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 21:37:57.701751  483106 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 21:37:57.701755  483106 command_runner.go:130] > # will be added.
	I1202 21:37:57.701763  483106 command_runner.go:130] > # default_capabilities = [
	I1202 21:37:57.701968  483106 command_runner.go:130] > # 	"CHOWN",
	I1202 21:37:57.702017  483106 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 21:37:57.702029  483106 command_runner.go:130] > # 	"FSETID",
	I1202 21:37:57.702033  483106 command_runner.go:130] > # 	"FOWNER",
	I1202 21:37:57.702037  483106 command_runner.go:130] > # 	"SETGID",
	I1202 21:37:57.702040  483106 command_runner.go:130] > # 	"SETUID",
	I1202 21:37:57.702175  483106 command_runner.go:130] > # 	"SETPCAP",
	I1202 21:37:57.702197  483106 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 21:37:57.702202  483106 command_runner.go:130] > # 	"KILL",
	I1202 21:37:57.702205  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702213  483106 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 21:37:57.702220  483106 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 21:37:57.702225  483106 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 21:37:57.702232  483106 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 21:37:57.702247  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702251  483106 command_runner.go:130] > default_sysctls = [
	I1202 21:37:57.702282  483106 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 21:37:57.702290  483106 command_runner.go:130] > ]
	I1202 21:37:57.702302  483106 command_runner.go:130] > # List of devices on the host that a
	I1202 21:37:57.702309  483106 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 21:37:57.702317  483106 command_runner.go:130] > # allowed_devices = [
	I1202 21:37:57.702321  483106 command_runner.go:130] > # 	"/dev/fuse",
	I1202 21:37:57.702326  483106 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 21:37:57.702496  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702509  483106 command_runner.go:130] > # List of additional devices, specified as
	I1202 21:37:57.702523  483106 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 21:37:57.702529  483106 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 21:37:57.702539  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702546  483106 command_runner.go:130] > # additional_devices = [
	I1202 21:37:57.702553  483106 command_runner.go:130] > # ]
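
A populated example combining the two device lists above, using the "<device-on-host>:<device-on-container>:<permissions>" form from the comment (the /dev/sdc mapping is the comment's own example; treat the rest as illustrative):

	allowed_devices = [
		"/dev/fuse",
		"/dev/net/tun",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",  # host device : container device : permissions
	]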
	I1202 21:37:57.702559  483106 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 21:37:57.702562  483106 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 21:37:57.702593  483106 command_runner.go:130] > # 	"/etc/cdi",
	I1202 21:37:57.702605  483106 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 21:37:57.702609  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702616  483106 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 21:37:57.702632  483106 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 21:37:57.702636  483106 command_runner.go:130] > # Defaults to false.
	I1202 21:37:57.702641  483106 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 21:37:57.702647  483106 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 21:37:57.702655  483106 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 21:37:57.702659  483106 command_runner.go:130] > # hooks_dir = [
	I1202 21:37:57.702849  483106 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 21:37:57.702860  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702867  483106 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 21:37:57.702879  483106 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 21:37:57.702886  483106 command_runner.go:130] > # its default mounts from the following two files:
	I1202 21:37:57.702893  483106 command_runner.go:130] > #
	I1202 21:37:57.702899  483106 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 21:37:57.702905  483106 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 21:37:57.702911  483106 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 21:37:57.702913  483106 command_runner.go:130] > #
	I1202 21:37:57.702919  483106 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 21:37:57.702925  483106 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 21:37:57.702932  483106 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 21:37:57.702937  483106 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 21:37:57.702942  483106 command_runner.go:130] > #
	I1202 21:37:57.702974  483106 command_runner.go:130] > # default_mounts_file = ""
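
To point CRI-O at a custom mounts file, the option takes an absolute path, and the file itself uses the /SRC:/DST, one-mount-per-line format described above (both values below are hypothetical):

	default_mounts_file = "/etc/containers/custom-mounts.conf"  # hypothetical path

	# contents of /etc/containers/custom-mounts.conf (hypothetical):
	/usr/share/secrets:/run/secrets

Note the caveat above: once default_mounts_file is set, CRI-O only adds the mounts it finds in that file.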
	I1202 21:37:57.702983  483106 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 21:37:57.702990  483106 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 21:37:57.703009  483106 command_runner.go:130] > # pids_limit = -1
	I1202 21:37:57.703018  483106 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 21:37:57.703024  483106 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 21:37:57.703030  483106 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 21:37:57.703039  483106 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 21:37:57.703043  483106 command_runner.go:130] > # log_size_max = -1
	I1202 21:37:57.703053  483106 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 21:37:57.703070  483106 command_runner.go:130] > # log_to_journald = false
	I1202 21:37:57.703082  483106 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 21:37:57.703090  483106 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 21:37:57.703102  483106 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 21:37:57.703112  483106 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 21:37:57.703121  483106 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 21:37:57.703294  483106 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 21:37:57.703314  483106 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 21:37:57.703388  483106 command_runner.go:130] > # read_only = false
	I1202 21:37:57.703403  483106 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 21:37:57.703410  483106 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 21:37:57.703414  483106 command_runner.go:130] > # live configuration reload.
	I1202 21:37:57.703418  483106 command_runner.go:130] > # log_level = "info"
	I1202 21:37:57.703429  483106 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 21:37:57.703434  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.703441  483106 command_runner.go:130] > # log_filter = ""
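
Both options support live configuration reload, so a debugging session might temporarily raise the verbosity and narrow the output with a regular expression; the filter value is illustrative:

	log_level = "debug"
	log_filter = "seccomp"  # illustrative regex; only log messages matching it are emitted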
	I1202 21:37:57.703448  483106 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703456  483106 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 21:37:57.703459  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703467  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703471  483106 command_runner.go:130] > # uid_mappings = ""
	I1202 21:37:57.703477  483106 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703489  483106 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 21:37:57.703492  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703500  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703504  483106 command_runner.go:130] > # gid_mappings = ""
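
A populated mapping in the containerUID:HostUID:Size (and containerGID:HostGID:Size) form described above would look like the following; the host ID range is illustrative:

	uid_mappings = "0:100000:65536"  # container UIDs 0-65535 map to host UIDs 100000-165535 (illustrative)
	gid_mappings = "0:100000:65536"  # same shape for GIDs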
	I1202 21:37:57.703510  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 21:37:57.703518  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703524  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703532  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703561  483106 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 21:37:57.703582  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 21:37:57.703590  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703596  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703606  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703769  483106 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 21:37:57.703787  483106 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 21:37:57.703803  483106 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 21:37:57.703810  483106 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 21:37:57.703970  483106 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 21:37:57.703985  483106 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 21:37:57.703996  483106 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 21:37:57.704002  483106 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 21:37:57.704010  483106 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 21:37:57.704013  483106 command_runner.go:130] > # drop_infra_ctr = true
	I1202 21:37:57.704023  483106 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 21:37:57.704035  483106 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 21:37:57.704043  483106 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 21:37:57.704046  483106 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 21:37:57.704053  483106 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 21:37:57.704059  483106 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 21:37:57.704066  483106 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 21:37:57.704073  483106 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 21:37:57.704077  483106 command_runner.go:130] > # shared_cpuset = ""
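
Both options take the Linux CPU list format mentioned above; an illustrative split that pins infra containers to the first two CPUs and shares two others:

	infra_ctr_cpuset = "0-1"  # illustrative; would typically match kubelet reserved-cpus
	shared_cpuset = "2-3"     # illustrative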
	I1202 21:37:57.704088  483106 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 21:37:57.704094  483106 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 21:37:57.704098  483106 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 21:37:57.704111  483106 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 21:37:57.704115  483106 command_runner.go:130] > # pinns_path = ""
	I1202 21:37:57.704126  483106 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 21:37:57.704133  483106 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 21:37:57.704159  483106 command_runner.go:130] > # enable_criu_support = true
	I1202 21:37:57.704170  483106 command_runner.go:130] > # Enable/disable the generation of the container and
	I1202 21:37:57.704177  483106 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 21:37:57.704281  483106 command_runner.go:130] > # enable_pod_events = false
	I1202 21:37:57.704302  483106 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 21:37:57.704308  483106 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 21:37:57.704428  483106 command_runner.go:130] > # default_runtime = "crun"
	I1202 21:37:57.704441  483106 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 21:37:57.704455  483106 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1202 21:37:57.704470  483106 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 21:37:57.704476  483106 command_runner.go:130] > # creation as a file is not desired either.
	I1202 21:37:57.704485  483106 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 21:37:57.704501  483106 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 21:37:57.704506  483106 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 21:37:57.704638  483106 command_runner.go:130] > # ]
	I1202 21:37:57.704649  483106 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 21:37:57.704656  483106 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 21:37:57.704663  483106 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 21:37:57.704668  483106 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 21:37:57.704671  483106 command_runner.go:130] > #
	I1202 21:37:57.704676  483106 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 21:37:57.704681  483106 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 21:37:57.704688  483106 command_runner.go:130] > # runtime_type = "oci"
	I1202 21:37:57.704693  483106 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 21:37:57.704697  483106 command_runner.go:130] > # inherit_default_runtime = false
	I1202 21:37:57.704710  483106 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 21:37:57.704715  483106 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 21:37:57.704720  483106 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 21:37:57.704728  483106 command_runner.go:130] > # monitor_env = []
	I1202 21:37:57.704733  483106 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 21:37:57.704737  483106 command_runner.go:130] > # allowed_annotations = []
	I1202 21:37:57.704743  483106 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 21:37:57.704749  483106 command_runner.go:130] > # no_sync_log = false
	I1202 21:37:57.704753  483106 command_runner.go:130] > # default_annotations = {}
	I1202 21:37:57.704757  483106 command_runner.go:130] > # stream_websockets = false
	I1202 21:37:57.704761  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.704791  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.704803  483106 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 21:37:57.704810  483106 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 21:37:57.704816  483106 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 21:37:57.704822  483106 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 21:37:57.704828  483106 command_runner.go:130] > #   in $PATH.
	I1202 21:37:57.704835  483106 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 21:37:57.704844  483106 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 21:37:57.704850  483106 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 21:37:57.704853  483106 command_runner.go:130] > #   state.
	I1202 21:37:57.704859  483106 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 21:37:57.704870  483106 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 21:37:57.704879  483106 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 21:37:57.704885  483106 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 21:37:57.704891  483106 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 21:37:57.704899  483106 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 21:37:57.704907  483106 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 21:37:57.704917  483106 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 21:37:57.704923  483106 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 21:37:57.704931  483106 command_runner.go:130] > #   The currently recognized values are:
	I1202 21:37:57.704940  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 21:37:57.704947  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 21:37:57.704954  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 21:37:57.704962  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 21:37:57.704969  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 21:37:57.704978  483106 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 21:37:57.704985  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 21:37:57.704992  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 21:37:57.705001  483106 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 21:37:57.705008  483106 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 21:37:57.705017  483106 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 21:37:57.705023  483106 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 21:37:57.705029  483106 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 21:37:57.705035  483106 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 21:37:57.705045  483106 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 21:37:57.705054  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 21:37:57.705068  483106 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 21:37:57.705072  483106 command_runner.go:130] > #   deprecated option "conmon".
	I1202 21:37:57.705080  483106 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 21:37:57.705088  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 21:37:57.705095  483106 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 21:37:57.705101  483106 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 21:37:57.705108  483106 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 21:37:57.705113  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 21:37:57.705129  483106 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 21:37:57.705135  483106 command_runner.go:130] > #   conmon-rs by using:
	I1202 21:37:57.705143  483106 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 21:37:57.705154  483106 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 21:37:57.705165  483106 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 21:37:57.705176  483106 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 21:37:57.705183  483106 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 21:37:57.705191  483106 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 21:37:57.705198  483106 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 21:37:57.705203  483106 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 21:37:57.705214  483106 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 21:37:57.705222  483106 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 21:37:57.705228  483106 command_runner.go:130] > #   when a machine crash happens.
	I1202 21:37:57.705235  483106 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 21:37:57.705243  483106 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 21:37:57.705253  483106 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 21:37:57.705257  483106 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 21:37:57.705263  483106 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 21:37:57.705273  483106 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 21:37:57.705275  483106 command_runner.go:130] > #
	I1202 21:37:57.705280  483106 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 21:37:57.705285  483106 command_runner.go:130] > #
	I1202 21:37:57.705292  483106 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 21:37:57.705301  483106 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 21:37:57.705304  483106 command_runner.go:130] > #
	I1202 21:37:57.705310  483106 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 21:37:57.705317  483106 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 21:37:57.705322  483106 command_runner.go:130] > #
	I1202 21:37:57.705328  483106 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 21:37:57.705331  483106 command_runner.go:130] > # feature.
	I1202 21:37:57.705336  483106 command_runner.go:130] > #
	I1202 21:37:57.705342  483106 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 21:37:57.705350  483106 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 21:37:57.705360  483106 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 21:37:57.705367  483106 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 21:37:57.705375  483106 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 21:37:57.705382  483106 command_runner.go:130] > #
	I1202 21:37:57.705388  483106 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 21:37:57.705397  483106 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 21:37:57.705399  483106 command_runner.go:130] > #
	I1202 21:37:57.705405  483106 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 21:37:57.705411  483106 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 21:37:57.705416  483106 command_runner.go:130] > #
	I1202 21:37:57.705422  483106 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 21:37:57.705428  483106 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 21:37:57.705433  483106 command_runner.go:130] > # limitation.
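
Putting the table format and the notifier requirements together, a hypothetical additional handler (the name and the annotation list are illustrative; the paths are copied from the runc entry below) could look like:

	[crio.runtime.runtimes.runc-debug]  # hypothetical handler name
	runtime_path = "/usr/libexec/crio/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",  # enables the seccomp notifier described above
	]

A pod would then select this handler via its runtimeClassName and set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop, with restartPolicy "Never" as noted above.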
	I1202 21:37:57.705469  483106 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 21:37:57.705480  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 21:37:57.705484  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705488  483106 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 21:37:57.705492  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705499  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705503  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705510  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705514  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705518  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705521  483106 command_runner.go:130] > allowed_annotations = [
	I1202 21:37:57.705734  483106 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 21:37:57.705745  483106 command_runner.go:130] > ]
	I1202 21:37:57.705770  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705779  483106 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 21:37:57.705849  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 21:37:57.705872  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705883  483106 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 21:37:57.705901  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705906  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705910  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705915  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705921  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705925  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705929  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705937  483106 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 21:37:57.705944  483106 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 21:37:57.705965  483106 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 21:37:57.705974  483106 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 21:37:57.705985  483106 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 21:37:57.706000  483106 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 21:37:57.706009  483106 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 21:37:57.706015  483106 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 21:37:57.706025  483106 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 21:37:57.706051  483106 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I1202 21:37:57.706057  483106 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1202 21:37:57.706077  483106 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 21:37:57.706082  483106 command_runner.go:130] > # Example:
	I1202 21:37:57.706087  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 21:37:57.706091  483106 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 21:37:57.706096  483106 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 21:37:57.706102  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 21:37:57.706105  483106 command_runner.go:130] > # cpuset = "0-1"
	I1202 21:37:57.706108  483106 command_runner.go:130] > # cpushares = "5"
	I1202 21:37:57.706112  483106 command_runner.go:130] > # cpuquota = "1000"
	I1202 21:37:57.706116  483106 command_runner.go:130] > # cpuperiod = "100000"
	I1202 21:37:57.706120  483106 command_runner.go:130] > # cpulimit = "35"
	I1202 21:37:57.706126  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.706131  483106 command_runner.go:130] > # The workload name is workload-type.
	I1202 21:37:57.706143  483106 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 21:37:57.706160  483106 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 21:37:57.706180  483106 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 21:37:57.706189  483106 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 21:37:57.706195  483106 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
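
For reference, a minimal Go sketch of the pod annotations that would opt a container into the example "workload-type" workload described above. The container name "server" and the cpushares value are hypothetical; only the annotation shapes come from the config comments.

package main

import "fmt"

func main() {
	// Opt the pod into the example "workload-type" workload: the activation
	// annotation is matched by key only; its value is ignored.
	annotations := map[string]string{
		"io.crio/workload": "",
		// Per-container override, following the documented form
		// $annotation_prefix.$resource/$ctrName = "value".
		// "server" and "200" are hypothetical.
		"io.crio.workload-type.cpushares/server": "200",
	}
	for key, value := range annotations {
		fmt.Printf("%s = %q\n", key, value)
	}
}
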
	I1202 21:37:57.706229  483106 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 21:37:57.706243  483106 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 21:37:57.706247  483106 command_runner.go:130] > # Default value is set to true
	I1202 21:37:57.706253  483106 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 21:37:57.706261  483106 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 21:37:57.706266  483106 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 21:37:57.706271  483106 command_runner.go:130] > # Default value is set to 'false'
	I1202 21:37:57.706275  483106 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 21:37:57.706280  483106 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 21:37:57.706291  483106 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 21:37:57.706299  483106 command_runner.go:130] > # timezone = ""
	I1202 21:37:57.706306  483106 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 21:37:57.706308  483106 command_runner.go:130] > #
	I1202 21:37:57.706315  483106 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system-wide
	I1202 21:37:57.706326  483106 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 21:37:57.706329  483106 command_runner.go:130] > [crio.image]
	I1202 21:37:57.706338  483106 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 21:37:57.706348  483106 command_runner.go:130] > # default_transport = "docker://"
	I1202 21:37:57.706354  483106 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 21:37:57.706360  483106 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706497  483106 command_runner.go:130] > # global_auth_file = ""
	I1202 21:37:57.706512  483106 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 21:37:57.706518  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706617  483106 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.706659  483106 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 21:37:57.706671  483106 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706677  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706682  483106 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 21:37:57.706688  483106 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 21:37:57.706698  483106 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 21:37:57.706714  483106 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 21:37:57.706730  483106 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 21:37:57.706734  483106 command_runner.go:130] > # pause_command = "/pause"
	I1202 21:37:57.706749  483106 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 21:37:57.706756  483106 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 21:37:57.706771  483106 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 21:37:57.706777  483106 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 21:37:57.706783  483106 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 21:37:57.706791  483106 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 21:37:57.706795  483106 command_runner.go:130] > # pinned_images = [
	I1202 21:37:57.706798  483106 command_runner.go:130] > # ]
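
The three pinned_images match modes described above (exact, glob with a trailing *, keyword with * on both ends) can be illustrated with a short Go sketch; matchesPinned is a hypothetical helper, not CRI-O's actual matcher.

package main

import (
	"fmt"
	"strings"
)

// matchesPinned mirrors the comment above: exact patterns must match the
// whole name, a trailing "*" makes it a prefix (glob) match, and "*" on both
// ends makes it a keyword (substring) match.
func matchesPinned(pattern, name string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(name, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
	default:
		return pattern == name
	}
}

func main() {
	fmt.Println(matchesPinned("registry.k8s.io/pause:3.10.1", "registry.k8s.io/pause:3.10.1")) // true (exact)
	fmt.Println(matchesPinned("registry.k8s.io/*", "registry.k8s.io/pause:3.10.1"))            // true (glob)
	fmt.Println(matchesPinned("*pause*", "registry.k8s.io/pause:3.10.1"))                      // true (keyword)
}
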
	I1202 21:37:57.706806  483106 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 21:37:57.706813  483106 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 21:37:57.706822  483106 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 21:37:57.706828  483106 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 21:37:57.706834  483106 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 21:37:57.707022  483106 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 21:37:57.707046  483106 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 21:37:57.707056  483106 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 21:37:57.707066  483106 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 21:37:57.707073  483106 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1202 21:37:57.707084  483106 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1202 21:37:57.707105  483106 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
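
A small Go sketch of the lookup order described above: try <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json first, then fall back to the global policy file. policyPath is a hypothetical helper for illustration, not CRI-O's implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// policyPath returns <dir>/<namespace>.json when a namespace is known and
// the file exists; otherwise it falls back to the global policy file.
func policyPath(dir, namespace, fallback string) string {
	if namespace != "" {
		p := filepath.Join(dir, namespace+".json")
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return fallback
}

func main() {
	// Paths taken from the config above; "kube-system" is an example namespace.
	fmt.Println(policyPath("/etc/crio/policies", "kube-system", "/etc/crio/policy.json"))
}
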
	I1202 21:37:57.707129  483106 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 21:37:57.707141  483106 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 21:37:57.707146  483106 command_runner.go:130] > # changing them here.
	I1202 21:37:57.707158  483106 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 21:37:57.707163  483106 command_runner.go:130] > # insecure_registries = [
	I1202 21:37:57.707278  483106 command_runner.go:130] > # ]
	I1202 21:37:57.707303  483106 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 21:37:57.707309  483106 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 21:37:57.707323  483106 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 21:37:57.707334  483106 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 21:37:57.707518  483106 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 21:37:57.707543  483106 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 21:37:57.707551  483106 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 21:37:57.707565  483106 command_runner.go:130] > # auto_reload_registries = false
	I1202 21:37:57.707577  483106 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 21:37:57.707586  483106 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1202 21:37:57.707593  483106 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 21:37:57.707601  483106 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 21:37:57.707626  483106 command_runner.go:130] > # The mode of short name resolution.
	I1202 21:37:57.707639  483106 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 21:37:57.707646  483106 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 21:37:57.707652  483106 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 21:37:57.707737  483106 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 21:37:57.707776  483106 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1202 21:37:57.707797  483106 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 21:37:57.707804  483106 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 21:37:57.707810  483106 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 21:37:57.707814  483106 command_runner.go:130] > # CNI plugins.
	I1202 21:37:57.707818  483106 command_runner.go:130] > [crio.network]
	I1202 21:37:57.707825  483106 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 21:37:57.707834  483106 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1202 21:37:57.707838  483106 command_runner.go:130] > # cni_default_network = ""
	I1202 21:37:57.707843  483106 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 21:37:57.707880  483106 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 21:37:57.707894  483106 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 21:37:57.707898  483106 command_runner.go:130] > # plugin_dirs = [
	I1202 21:37:57.708100  483106 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 21:37:57.708328  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708337  483106 command_runner.go:130] > # List of included pod metrics.
	I1202 21:37:57.708504  483106 command_runner.go:130] > # included_pod_metrics = [
	I1202 21:37:57.708692  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708716  483106 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1202 21:37:57.708721  483106 command_runner.go:130] > [crio.metrics]
	I1202 21:37:57.708725  483106 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 21:37:57.709042  483106 command_runner.go:130] > # enable_metrics = false
	I1202 21:37:57.709050  483106 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 21:37:57.709056  483106 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 21:37:57.709063  483106 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1202 21:37:57.709070  483106 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 21:37:57.709082  483106 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 21:37:57.709226  483106 command_runner.go:130] > # metrics_collectors = [
	I1202 21:37:57.709424  483106 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 21:37:57.709616  483106 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 21:37:57.709807  483106 command_runner.go:130] > # 	"containers_oom_total",
	I1202 21:37:57.709999  483106 command_runner.go:130] > # 	"processes_defunct",
	I1202 21:37:57.710186  483106 command_runner.go:130] > # 	"operations_total",
	I1202 21:37:57.710377  483106 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 21:37:57.710569  483106 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 21:37:57.710759  483106 command_runner.go:130] > # 	"operations_errors_total",
	I1202 21:37:57.710953  483106 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 21:37:57.711154  483106 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 21:37:57.711347  483106 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 21:37:57.711541  483106 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 21:37:57.711734  483106 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 21:37:57.711929  483106 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 21:37:57.712114  483106 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 21:37:57.712326  483106 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 21:37:57.712521  483106 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 21:37:57.712708  483106 command_runner.go:130] > # ]
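
The prefix equivalence noted above ("operations", "crio_operations" and "container_runtime_crio_operations" are treated the same) amounts to stripping two optional prefixes. A Go sketch with a hypothetical normalize helper:

package main

import (
	"fmt"
	"strings"
)

// normalize strips the two optional prefixes so that all three spellings of
// a collector name compare equal. Illustration only, not CRI-O's code.
func normalize(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	return strings.TrimPrefix(name, "crio_")
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(normalize(n)) // each prints "operations"
	}
}
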
	I1202 21:37:57.712718  483106 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 21:37:57.713101  483106 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 21:37:57.713111  483106 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 21:37:57.713462  483106 command_runner.go:130] > # metrics_port = 9090
	I1202 21:37:57.713472  483106 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 21:37:57.713766  483106 command_runner.go:130] > # metrics_socket = ""
	I1202 21:37:57.713798  483106 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 21:37:57.713843  483106 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 21:37:57.713867  483106 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 21:37:57.713890  483106 command_runner.go:130] > # certificate on any modification event.
	I1202 21:37:57.714026  483106 command_runner.go:130] > # metrics_cert = ""
	I1202 21:37:57.714049  483106 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 21:37:57.714055  483106 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 21:37:57.714333  483106 command_runner.go:130] > # metrics_key = ""
	I1202 21:37:57.714367  483106 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 21:37:57.714411  483106 command_runner.go:130] > [crio.tracing]
	I1202 21:37:57.714434  483106 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 21:37:57.714690  483106 command_runner.go:130] > # enable_tracing = false
	I1202 21:37:57.714730  483106 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 21:37:57.715040  483106 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 21:37:57.715074  483106 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 21:37:57.715400  483106 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 21:37:57.715424  483106 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 21:37:57.715465  483106 command_runner.go:130] > [crio.nri]
	I1202 21:37:57.715486  483106 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 21:37:57.715706  483106 command_runner.go:130] > # enable_nri = true
	I1202 21:37:57.715731  483106 command_runner.go:130] > # NRI socket to listen on.
	I1202 21:37:57.716042  483106 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 21:37:57.716072  483106 command_runner.go:130] > # NRI plugin directory to use.
	I1202 21:37:57.716381  483106 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 21:37:57.716412  483106 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 21:37:57.716702  483106 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 21:37:57.716734  483106 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 21:37:57.716910  483106 command_runner.go:130] > # nri_disable_connections = false
	I1202 21:37:57.716983  483106 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 21:37:57.717007  483106 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 21:37:57.717025  483106 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 21:37:57.717040  483106 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 21:37:57.717084  483106 command_runner.go:130] > # NRI default validator configuration.
	I1202 21:37:57.717109  483106 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 21:37:57.717127  483106 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 21:37:57.717180  483106 command_runner.go:130] > # can be restricted/rejected:
	I1202 21:37:57.717207  483106 command_runner.go:130] > # - OCI hook injection
	I1202 21:37:57.717238  483106 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 21:37:57.717387  483106 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 21:37:57.717408  483106 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 21:37:57.717448  483106 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 21:37:57.717469  483106 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 21:37:57.717489  483106 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 21:37:57.717520  483106 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 21:37:57.717542  483106 command_runner.go:130] > #
	I1202 21:37:57.717559  483106 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 21:37:57.717588  483106 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 21:37:57.717614  483106 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 21:37:57.717634  483106 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 21:37:57.717673  483106 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 21:37:57.717700  483106 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 21:37:57.717721  483106 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 21:37:57.717750  483106 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 21:37:57.717775  483106 command_runner.go:130] > # ]
	I1202 21:37:57.717791  483106 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 21:37:57.717809  483106 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 21:37:57.717844  483106 command_runner.go:130] > [crio.stats]
	I1202 21:37:57.717862  483106 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 21:37:57.717880  483106 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 21:37:57.717896  483106 command_runner.go:130] > # stats_collection_period = 0
	I1202 21:37:57.717933  483106 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 21:37:57.717955  483106 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 21:37:57.717969  483106 command_runner.go:130] > # collection_period = 0
	I1202 21:37:57.719581  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.679996811Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 21:37:57.719602  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680035195Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 21:37:57.719612  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680068245Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 21:37:57.719634  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680094978Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 21:37:57.719650  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680175192Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.719661  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680551245Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 21:37:57.719673  483106 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 21:37:57.719793  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:57.719806  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:57.719822  483106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:37:57.719854  483106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:37:57.719977  483106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
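
	The generated kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) in one file separated by "---", which the log then writes to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of splitting such a multi-document file and listing the kinds; the splitting logic is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the scp line in the log.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// Multi-document YAML: documents are separated by a "---" line.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind: ") {
				fmt.Println(strings.TrimSpace(line)) // e.g. "kind: InitConfiguration"
			}
		}
	}
}
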
	
	I1202 21:37:57.720050  483106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:37:57.727128  483106 command_runner.go:130] > kubeadm
	I1202 21:37:57.727200  483106 command_runner.go:130] > kubectl
	I1202 21:37:57.727217  483106 command_runner.go:130] > kubelet
	I1202 21:37:57.727679  483106 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:37:57.727758  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:37:57.735128  483106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:37:57.747401  483106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:37:57.759635  483106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:37:57.772168  483106 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:37:57.775704  483106 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:37:57.775781  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.892482  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:58.414394  483106 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:37:58.414415  483106 certs.go:195] generating shared ca certs ...
	I1202 21:37:58.414431  483106 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:58.414617  483106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:37:58.414690  483106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:37:58.414702  483106 certs.go:257] generating profile certs ...
	I1202 21:37:58.414822  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:37:58.414884  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:37:58.414927  483106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:37:58.414939  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:37:58.414953  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:37:58.414964  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:37:58.414980  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:37:58.414991  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:37:58.415019  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:37:58.415030  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:37:58.415042  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:37:58.415094  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:37:58.415127  483106 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:37:58.415140  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:37:58.415171  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:37:58.415199  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:37:58.415223  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:37:58.415279  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:58.415327  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.415344  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem -> /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.415358  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.415948  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:37:58.434575  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:37:58.454217  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:37:58.476636  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:37:58.499852  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:37:58.517799  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:37:58.537626  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:37:58.556051  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:37:58.573621  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:37:58.591561  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:37:58.609240  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:37:58.626214  483106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:37:58.638898  483106 ssh_runner.go:195] Run: openssl version
	I1202 21:37:58.644941  483106 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:37:58.645379  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:37:58.653758  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657242  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657279  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657350  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.697450  483106 command_runner.go:130] > b5213941
	I1202 21:37:58.697880  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:37:58.705830  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:37:58.714550  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718238  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718320  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718390  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.760939  483106 command_runner.go:130] > 51391683
	I1202 21:37:58.761409  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:37:58.769112  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:37:58.777300  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780878  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780914  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780988  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.821311  483106 command_runner.go:130] > 3ec20f2e
	I1202 21:37:58.821773  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
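
The ls / openssl x509 -hash / ln -fs sequence repeated above is the standard OpenSSL CA-installation pattern: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0), which is how OpenSSL locates trust anchors. A hedged Go sketch of the same two steps, shelling out to openssl just as the log does; paths are taken from the log, and running it would need the same root privileges:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <pem>` and returns the
// short subject hash (e.g. "b5213941") that OpenSSL uses for lookups.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	hash, err := subjectHash(pem)
	if err != nil {
		panic(err)
	}
	// Equivalent of: ln -fs <pem> /etc/ssl/certs/<hash>.0 (needs root).
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := exec.Command("ln", "-fs", pem, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}
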
	I1202 21:37:58.829482  483106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833099  483106 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833249  483106 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:37:58.833277  483106 command_runner.go:130] > Device: 259,1	Inode: 1309045     Links: 1
	I1202 21:37:58.833296  483106 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:58.833318  483106 command_runner.go:130] > Access: 2025-12-02 21:33:51.106313964 +0000
	I1202 21:37:58.833335  483106 command_runner.go:130] > Modify: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833354  483106 command_runner.go:130] > Change: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833368  483106 command_runner.go:130] >  Birth: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833452  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:37:58.873701  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.874162  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:37:58.914810  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.915281  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:37:58.957479  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.957884  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:37:58.998366  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.998755  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:37:59.041919  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.042032  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:37:59.082406  483106 command_runner.go:130] > Certificate will not expire
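
Each openssl x509 -checkend 86400 invocation above asks whether the certificate expires within the next 86400 seconds (24 hours), printing "Certificate will not expire" when it does not. The same check can be done in pure Go with crypto/x509; the file path is one of those from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: does the certificate expire within the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
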
	I1202 21:37:59.082849  483106 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:59.082947  483106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:37:59.083063  483106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:37:59.109816  483106 cri.go:89] found id: ""
	I1202 21:37:59.109903  483106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:37:59.116871  483106 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:37:59.116937  483106 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:37:59.116958  483106 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:37:59.117791  483106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:37:59.117835  483106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:37:59.117913  483106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:37:59.125060  483106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:37:59.125506  483106 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.125617  483106 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-066896" cluster setting kubeconfig missing "functional-066896" context setting]
	I1202 21:37:59.125900  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.126337  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.126509  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.127095  483106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:37:59.127116  483106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:37:59.127122  483106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:37:59.127127  483106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:37:59.127133  483106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:37:59.127170  483106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:37:59.127484  483106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:37:59.134957  483106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:37:59.134991  483106 kubeadm.go:602] duration metric: took 17.137902ms to restartPrimaryControlPlane
	I1202 21:37:59.135014  483106 kubeadm.go:403] duration metric: took 52.172876ms to StartCluster
	I1202 21:37:59.135029  483106 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135086  483106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.135727  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135915  483106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:37:59.136175  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:59.136232  483106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:37:59.136325  483106 addons.go:70] Setting storage-provisioner=true in profile "functional-066896"
	I1202 21:37:59.136339  483106 addons.go:239] Setting addon storage-provisioner=true in "functional-066896"
	I1202 21:37:59.136375  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.136437  483106 addons.go:70] Setting default-storageclass=true in profile "functional-066896"
	I1202 21:37:59.136458  483106 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-066896"
	I1202 21:37:59.136761  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.136798  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.139277  483106 out.go:179] * Verifying Kubernetes components...
	I1202 21:37:59.140771  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:59.165976  483106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:37:59.168845  483106 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.168870  483106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:37:59.168937  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.175656  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.176018  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.176385  483106 addons.go:239] Setting addon default-storageclass=true in "functional-066896"
	I1202 21:37:59.176428  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.176909  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.211203  483106 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:37:59.211229  483106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:37:59.211311  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.225207  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.248989  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.349954  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:59.407494  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.408663  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.165713  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165766  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165797  483106 retry.go:31] will retry after 202.822033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165873  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165889  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165899  483106 retry.go:31] will retry after 281.773783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.166009  483106 node_ready.go:35] waiting up to 6m0s for node "functional-066896" to be "Ready" ...
	I1202 21:38:00.166135  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.166200  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.368900  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.441989  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.442041  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.442063  483106 retry.go:31] will retry after 393.334545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.448331  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.512520  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.512571  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.512592  483106 retry.go:31] will retry after 493.57139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.666814  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.667270  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.835693  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.896509  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.896567  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.896588  483106 retry.go:31] will retry after 517.359335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.006926  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.069882  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.069952  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.069980  483106 retry.go:31] will retry after 823.867865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.167068  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.167622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.415018  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:01.473591  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.473646  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.473665  483106 retry.go:31] will retry after 817.290744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.666990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.667103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.894929  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.964144  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.967581  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.967615  483106 retry.go:31] will retry after 586.961084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.167465  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:02.167512  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
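
Interleaved with the addon retries, node_ready.go is polling GET /api/v1/nodes/functional-066896 every ~500ms (note the .166/.666 timestamps) under the 6m0s deadline declared at 21:38:00.166, tolerating "connection refused" while the apiserver restarts. Below is a stdlib-only sketch of that poll loop; the helper name is assumed, and real credentials/TLS configuration from the kubeconfig are omitted, so this is a shape-of-the-loop illustration rather than a working API client.

// poll_sketch.go -- stdlib sketch of the node "Ready" polling loop traced by
// the node_ready.go lines: one GET per ~500ms tick, for up to 6 minutes,
// retrying through transport errors. Illustrative only; a real client would
// carry the kubeconfig's auth and CA material.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url until a request returns 200 or ctx expires.
// Transport errors (like connection refused) are logged and retried,
// mirroring the "will retry" warnings in the log above.
func waitNodeReady(ctx context.Context, client *http.Client, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node fetched; caller would inspect its Ready condition
			}
		} else {
			fmt.Printf("error getting node (will retry): %v\n", err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node never became reachable: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Same 6-minute budget the log declares; if the server stays down,
	// this loop keeps retrying until the deadline, as the log does.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeReady(ctx, http.DefaultClient,
		"https://192.168.49.2:8441/api/v1/nodes/functional-066896")
	fmt.Println("wait result:", err)
}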
	I1202 21:38:02.292000  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:02.348780  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.352211  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.352246  483106 retry.go:31] will retry after 1.098539896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.555610  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:02.616881  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.616985  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.617011  483106 retry.go:31] will retry after 1.090026315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.667191  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.667272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.667575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.451026  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:03.515404  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.515439  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.515458  483106 retry.go:31] will retry after 2.58724354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.666944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.667328  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.707632  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:03.776872  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.776924  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.776953  483106 retry.go:31] will retry after 972.290717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.166626  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.166706  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:04.666777  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.666867  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.667243  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:04.667303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:04.749460  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:04.810694  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:04.810734  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.810752  483106 retry.go:31] will retry after 3.951899284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:05.166161  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.166235  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.166558  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:05.666140  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.666212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.102988  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:06.161220  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:06.161263  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.161284  483106 retry.go:31] will retry after 3.838527337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.166366  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.166444  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.666314  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.666386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:07.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.166299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:07.166671  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:07.666338  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.666425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.666777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.166503  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.166606  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.166933  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.666295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.666603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.763053  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:08.821648  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:08.821701  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:08.821721  483106 retry.go:31] will retry after 4.430309202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:09.166538  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.166615  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.166964  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:09.167037  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:09.666806  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.666904  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.667263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.001423  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:10.065960  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:10.069561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.069595  483106 retry.go:31] will retry after 4.835447081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.166750  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.166827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.167127  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.666978  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.667076  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.667385  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:11.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.167266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.167557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:11.167608  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:11.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.666586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.166242  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.167025  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.167092  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.167359  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.252779  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:13.311539  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:13.314561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.314593  483106 retry.go:31] will retry after 7.77807994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.667097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.667178  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.667555  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:13.667614  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:14.166435  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.166532  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.166857  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.666157  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.666230  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.666502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.906038  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:14.963486  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:14.966545  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:14.966583  483106 retry.go:31] will retry after 9.105443561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:15.166926  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.167018  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.167368  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:15.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.666221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.666564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:16.166892  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.166962  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.167321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:16.167385  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:16.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.667311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.667666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.166271  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.166345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.166811  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.666246  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.666576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.166665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.666398  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.666474  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.666809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:18.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:19.167020  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.167103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.167423  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:19.666169  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.666247  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.166216  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.166296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.166641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.666328  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.666687  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:21.093408  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:21.149979  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:21.153644  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.153677  483106 retry.go:31] will retry after 11.903983297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.166790  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.167199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:21.167253  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:21.666923  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.667013  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.667352  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.166588  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.166661  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.166957  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.666842  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.666921  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.667250  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:23.167035  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.167114  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.167459  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:23.167514  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:23.666741  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.666815  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.667100  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.072876  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:24.134664  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:24.134721  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.134742  483106 retry.go:31] will retry after 11.08333461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
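
Worth noting about the repeated error text: the suggested --validate=false would not rescue these applies. Validation fails because kubectl cannot download the OpenAPI schema from localhost:8441 at all, and with the apiserver refusing connections the apply itself would fail the same way. A cheap precondition, sketched below, is to probe the apiserver's standard /readyz endpoint before the next attempt; the helper name is assumed, and TLS verification is skipped purely for the sketch (a real client would use the CA from the kubeconfig).

// probe_sketch.go -- illustrative apiserver readiness probe to gate retries
// on, instead of burning attempts while the server is down.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady reports whether the apiserver answers /readyz with 200.
func apiserverReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-signed cluster cert; skipping verification only for
			// this sketch -- use the kubeconfig's CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println("apiserver ready:", apiserverReady("https://localhost:8441"))
}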
	I1202 21:38:24.166922  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.166990  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.167311  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.666947  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.667038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.667366  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.167335  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.667220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.667299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.667607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:25.667651  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
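The interleaved GETs above are minikube's node-readiness wait: every 500ms it fetches /api/v1/nodes/functional-066896 and checks the node's Ready condition, logging a will-retry warning while the connection is refused. Below is a minimal client-go sketch of such a poll, assuming a kubeconfig at the default location; it is illustrative only, not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := nodeReady(cs, "functional-066896")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ready {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The 500ms cadence matches the .166/.666 sub-second timestamps of the polls above; minikube additionally rate-limits the warning so it surfaces only every few polls rather than on every failed GET.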
	I1202 21:38:26.166305  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.166387  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.166780  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:26.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.666584  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.166286  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.166358  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.666223  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.666297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.666627  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:28.166866  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.166938  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:28.167314  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:28.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.667185  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.166534  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.166605  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.166912  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.666220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.666294  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.666610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.166321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.166409  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:30.666751  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:31.166158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.166232  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.166500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:31.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.666300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.666629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.166362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.666462  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:32.666785  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:33.058732  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:33.133401  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:33.133437  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.133456  483106 retry.go:31] will retry after 7.836153133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.166617  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.166698  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.167044  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:33.666857  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.666928  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.166841  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.166919  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.167201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.666992  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.667107  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.667433  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:34.667486  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:35.166145  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.166224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.166561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:35.218798  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:35.277107  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:35.277160  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.277179  483106 retry.go:31] will retry after 18.212486347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.666236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.666575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.166236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.166653  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.666345  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.666418  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.666776  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:37.166874  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.166942  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.167236  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:37.167279  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:37.667058  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.667144  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.167192  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.167270  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.167629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.166835  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.166911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.167230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.667062  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.667137  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.667449  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:39.667503  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:40.166787  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.666946  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.667046  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.667374  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.969813  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:41.027522  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:41.030695  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.030727  483106 retry.go:31] will retry after 26.445141412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.167017  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.167086  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.167412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:41.667158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.667226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.667538  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:41.667593  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:42.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.166302  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.166668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:42.666412  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.666487  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.666864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.167082  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.167382  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.667222  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.667290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.667605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:43.667663  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:44.166619  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.166695  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.167048  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:44.666563  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.666635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.666906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.166291  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.666557  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.666637  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.666980  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:46.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.166248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.166526  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:46.166568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:46.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.666372  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.166454  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.166529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.166849  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.667114  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.667196  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.667500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:48.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.166278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.166598  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:48.166644  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:48.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.166918  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.166985  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.167265  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.667124  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:50.167148  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.167544  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:50.167600  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:50.666859  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.666941  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.667348  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.166149  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.666321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.666742  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.167091  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.167502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.666630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:52.666682  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:53.166365  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.166440  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.166743  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:53.490393  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:53.549126  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:53.552379  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.552413  483106 retry.go:31] will retry after 28.270272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.666480  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.666897  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.166899  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.166977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.167310  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.667106  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.667183  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.667452  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:54.667501  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:55.166711  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.166784  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.167096  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:55.666915  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.666986  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.667321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.167141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.167212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.167527  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.666215  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:57.166258  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:57.166735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:57.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.166964  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.167097  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.167360  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.667123  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.667203  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.667560  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:59.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.166590  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.166930  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:59.166985  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:59.666233  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.666305  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.666578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.166345  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.166424  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.166735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.666605  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.666696  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.667071  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:01.166833  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.166920  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.167258  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:01.167303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:01.667118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.667514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.166229  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.166308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.666901  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.666977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.667267  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:03.167047  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.167126  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.167463  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:03.167519  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:03.667138  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.667208  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.667536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.166363  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.166711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.666264  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.666699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.166401  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.166480  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.166807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.666607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:05.666654  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:06.166221  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.166300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:06.666253  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.666658  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.166933  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.167016  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.167275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.476950  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:07.537734  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:07.540988  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.541021  483106 retry.go:31] will retry after 43.142584555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.666246  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:07.666721  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:08.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.166806  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:08.666497  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.666831  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.167081  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.167424  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.666233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:10.166170  483106 type.go:168] "Request Body" body=""
	I1202 21:39:10.166240  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:10.166510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:10.166560  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 poll repeated every ~500 ms from 21:39:10.666 through 21:39:21.667, every response empty; node_ready.go:55 logged the identical connection-refused warning roughly every 2 s (21:39:12, :14, :16, :18, :20) ...]
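
	The cadence above is a plain poll-until-ready loop: one GET every 500 ms, a warning every few failures, retrying for as long as the connection is refused. A self-contained sketch of that pattern using only the standard library (hypothetical; minikube's actual loop lives in node_ready.go and goes through client-go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// URL and 500 ms cadence taken from the log above; everything else
	// is an illustrative assumption.
	const url = "https://192.168.49.2:8441/api/v1/nodes/functional-066896"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is self-signed in this environment; skip
		// verification only because this is a sketch.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the log: "connect: connection refused" is retryable.
			log.Printf("error getting node (will retry): %v", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("node object reachable; check its Ready condition next")
				return
			}
			log.Printf("unexpected status (will retry): %s", resp.Status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the apiserver")
}
```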
	I1202 21:39:21.822959  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:39:21.878670  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878722  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878822  483106 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
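
	The apply fails before anything reaches the cluster: kubectl's client-side validation first downloads the apiserver's OpenAPI schema, so with port 8441 refusing connections even a valid manifest is rejected, and the error text itself points at --validate=false as the escape hatch. Minikube instead retries the whole apply, roughly along these lines (hypothetical helper names; the real retry logic is in addons.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon shells out the same way the log does. It fails with
// "failed to download openapi" while the apiserver is unreachable,
// because validation needs the schema before applying anything.
func applyAddon(manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
	}
	return nil
}

func main() {
	// Path taken from the log; the ~29 s gap between the two apply
	// attempts above suggests a retry interval of roughly this order.
	const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
	for attempt := 1; attempt <= 5; attempt++ {
		err := applyAddon(manifest)
		if err == nil {
			fmt.Println("addon applied")
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(30 * time.Second)
	}
}
```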
	[... polling continued unchanged every ~500 ms from 21:39:22.167 through 21:39:50.667, with the same node_ready.go:55 connection-refused warning roughly every 2 s (21:39:23 through 21:39:49) ...]
	I1202 21:39:50.684445  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:50.752913  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.752959  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.753053  483106 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:50.754872  483106 out.go:179] * Enabled addons: 
	I1202 21:39:50.756298  483106 addons.go:530] duration metric: took 1m51.620061888s for enable addons: enabled=[]
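
	enabled=[] records that after 1m51.6s of retries no addon callback ever succeeded, so the addon phase gives up with an empty set. The duration metric itself is ordinary elapsed-time logging, along these lines (a sketch, not minikube's actual addons.go):

```go
package main

import (
	"log"
	"time"
)

func main() {
	start := time.Now()
	var enabled []string
	// ... try to enable each addon here, appending its name on success;
	// in the run above every attempt failed, so the slice stays empty ...
	log.Printf("duration metric: took %s for enable addons: enabled=%v",
		time.Since(start), enabled)
}
```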
	[... polling continued unchanged every ~500 ms from 21:39:51.166 through 21:40:05.667, with the same node_ready.go:55 connection-refused warning roughly every 2 s (21:39:51 through 21:40:05) ...]
	I1202 21:40:06.166784  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.166862  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.167188  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:06.666980  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.667073  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.667410  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:07.167168  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.167242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.167577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:07.167637  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:07.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.166347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.666848  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.666917  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.667201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.167192  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.167533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:09.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:10.166218  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.166297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.166630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:10.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.166230  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.166652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.666139  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.666209  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:12.166254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:12.166731  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:12.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.166445  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.166702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:14.166684  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.166770  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.167156  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:14.167223  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:14.666896  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.667255  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.167098  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.167589  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.666317  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.666392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:16.166898  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.166964  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.167280  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:16.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:16.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.667212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.667594  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.166183  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.166578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.666227  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.666643  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.166363  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.166741  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.666473  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.666544  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.666888  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:18.666946  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:19.166811  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.166894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.167197  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:19.667052  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.667494  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.166251  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.666278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.666536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:21.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:21.166718  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:21.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.166154  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.166236  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.166525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.666654  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:23.166272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.166350  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.166696  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:23.166758  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:23.667064  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.166423  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.166514  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.166938  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.666591  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.666926  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.166167  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.166574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.666268  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.666343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:25.666738  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:26.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.166386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.166758  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:26.667125  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.667482  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.166187  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.166601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.666179  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.666248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:28.166873  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.166943  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.167276  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:28.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:28.667149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.667219  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.667624  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:29.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:40:29.166678  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:29.167031  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:29.666202  483106 type.go:168] "Request Body" body=""
	I1202 21:40:29.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:29.666578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:30.166296  483106 type.go:168] "Request Body" body=""
	I1202 21:40:30.166374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:30.166722  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:30.666438  483106 type.go:168] "Request Body" body=""
	I1202 21:40:30.666516  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:30.666818  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:30.666863  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:31.167130  483106 type.go:168] "Request Body" body=""
	I1202 21:40:31.167203  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:31.167472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:31.666847  483106 type.go:168] "Request Body" body=""
	I1202 21:40:31.666919  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:31.667279  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:32.167093  483106 type.go:168] "Request Body" body=""
	I1202 21:40:32.167163  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:32.167483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:32.666708  483106 type.go:168] "Request Body" body=""
	I1202 21:40:32.666786  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:32.667188  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:32.667239  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:33.166964  483106 type.go:168] "Request Body" body=""
	I1202 21:40:33.167053  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:33.167388  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:33.666150  483106 type.go:168] "Request Body" body=""
	I1202 21:40:33.666225  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:33.666552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:34.166210  483106 type.go:168] "Request Body" body=""
	I1202 21:40:34.166281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:34.166580  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:34.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:34.666327  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:34.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:35.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:40:35.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:35.166672  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:35.166733  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:35.667033  483106 type.go:168] "Request Body" body=""
	I1202 21:40:35.667102  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:35.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:36.167161  483106 type.go:168] "Request Body" body=""
	I1202 21:40:36.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:36.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:36.666260  483106 type.go:168] "Request Body" body=""
	I1202 21:40:36.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:36.666682  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:37.166210  483106 type.go:168] "Request Body" body=""
	I1202 21:40:37.166281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:37.166552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:37.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:40:37.666343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:37.666698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:37.666757  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:38.166422  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.166500  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.166829  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:38.666194  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.666265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.167095  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.666900  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.666974  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.667318  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:39.667375  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:40.167120  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.167190  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.167543  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:40.666231  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.166347  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.166425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.166750  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.666274  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.666605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:42.166537  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.166619  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:42.167094  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:42.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.666923  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.167057  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.167134  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.167398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.667173  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.667599  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.166501  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.166575  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.166892  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.666149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.666222  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.666488  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:44.666529  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:45.166301  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.166394  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.166815  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:45.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.666688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.166383  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.166726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.666288  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.666390  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.666823  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:46.666883  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:47.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:47.666906  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.666980  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.667259  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.167086  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.167539  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:49.166560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.166634  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:49.166951  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:49.666759  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.666827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.667195  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.167180  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.167561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.166662  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.666376  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.666454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.666782  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:51.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:52.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.166277  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:52.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.666260  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.166242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.166586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:54.166666  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.166740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.167107  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:54.167169  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:54.666965  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.667066  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.667453  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.166768  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.166843  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.167212  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.667075  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.667147  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.166196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.666907  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.666978  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.667341  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:56.667400  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:57.167105  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.167182  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.167548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:57.666151  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.666224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.666574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:59.166616  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.166687  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.167061  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:59.167133  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:59.666436  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.666763  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.166433  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.166775  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.666772  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.666864  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.667256  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.166511  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.166588  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.166874  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.666242  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.666312  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.666652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:01.666713  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:02.166240  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:02.666821  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.667219  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.167019  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.167098  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.167404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.667108  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.667179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.667509  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:03.667571  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:04.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.166539  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:04.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.666387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.666456  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:06.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:06.166736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:06.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.666668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.166352  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.166429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.166638  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:08.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:09.166897  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.167350  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:09.667159  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.667231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.667559  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.166198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.166610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:11.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.166812  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:11.166864  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:11.667095  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.667159  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.667414  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.167205  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.167279  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.167635  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.666270  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.666734  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.166244  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.166554  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.666237  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:13.666743  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:14.166756  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.166839  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.167224  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:14.666384  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.666452  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.666765  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.166506  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.166604  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.666880  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.666953  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.667301  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:15.667360  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:16.167103  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.167186  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.167467  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:16.666185  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.666259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.666581  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.166400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.166698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.666368  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.666435  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.666759  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:18.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.166336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:18.166712  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:18.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.666316  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.166992  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.666925  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.667275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:20.167102  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.167179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.167552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:20.167610  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.166282  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.166361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.166713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.666428  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.666878  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.166118  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.166189  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.166472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.666186  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.666263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.666583  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:22.666636  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:23.166387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:23.666524  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.666616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.666974  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.166861  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.166944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.167295  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.667130  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.667205  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.667569  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:24.667625  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:25.166285  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.166367  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.166640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:25.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.166431  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.166504  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.166839  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.666268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:27.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:27.166741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:27.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.166370  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.166448  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.666614  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:29.166581  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.166657  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.166988  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:29.167064  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:29.666310  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.666379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.166344  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.666407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.666494  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.666837  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.166203  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.166591  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.666262  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.666700  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:31.666773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:32.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.166666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:32.666931  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.667021  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.167169  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.666354  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:34.166448  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.166521  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.166778  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:34.166817  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:34.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.166425  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.166518  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.166928  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.666213  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.666489  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.166173  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.166250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.166587  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.666281  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.666706  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:36.666759  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:37.166409  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.166478  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.166748  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:37.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.666371  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.166380  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.166751  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.666156  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.666498  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:39.166531  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.166607  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.166922  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:39.166975  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:39.666290  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.666360  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.666641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.166314  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.166383  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.166661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.666295  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.666370  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.666709  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.166407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.166482  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.166800  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.666481  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.666552  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.666826  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:41.666867  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:42.166504  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.166597  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.167020  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:42.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.166575  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.166655  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.166923  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.666265  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:44.166680  483106 type.go:168] "Request Body" body=""
	I1202 21:41:44.166751  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:44.167102  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:44.167158  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:44.666373  483106 type.go:168] "Request Body" body=""
	I1202 21:41:44.666442  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:44.666712  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:45.166323  483106 type.go:168] "Request Body" body=""
	I1202 21:41:45.166419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:45.166904  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:45.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:41:45.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:45.666682  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:46.166971  483106 type.go:168] "Request Body" body=""
	I1202 21:41:46.167054  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:46.167358  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:46.167415  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:46.667180  483106 type.go:168] "Request Body" body=""
	I1202 21:41:46.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:46.667573  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:47.166272  483106 type.go:168] "Request Body" body=""
	I1202 21:41:47.166353  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:47.166671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:47.666144  483106 type.go:168] "Request Body" body=""
	I1202 21:41:47.666220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:47.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:48.166246  483106 type.go:168] "Request Body" body=""
	I1202 21:41:48.166328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:48.166655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:48.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:41:48.666285  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:48.666616  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:48.666674  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:49.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:41:49.166829  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:49.167114  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:49.666912  483106 type.go:168] "Request Body" body=""
	I1202 21:41:49.667008  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:49.667343  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:50.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:41:50.167265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:50.167597  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:50.666842  483106 type.go:168] "Request Body" body=""
	I1202 21:41:50.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:50.667199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:50.667239  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 readiness poll repeats every ~500ms from 21:41:51 through 21:42:52, each request carrying the same Accept and User-Agent headers and returning an empty response (status="" milliseconds=0); the node_ready.go:55 "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning recurs roughly every 2s, from 21:41:53 through 21:42:51]
	I1202 21:42:53.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.166466  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.166825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.667187  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.667483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:53.667539  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:54.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.166598  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.166946  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:54.666794  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.666869  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.166549  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.166809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.666671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:56.166359  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.166777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:56.166834  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:56.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.166224  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.166303  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.166628  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.166503  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:58.666661  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:59.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.166838  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.167155  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:59.666449  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.666515  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.166309  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.166395  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.666575  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.666682  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.667068  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:00.667126  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:01.166853  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.167038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.167371  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:01.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.667265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.667601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.166238  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.166322  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.666979  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.667074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.667353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:02.667401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:03.167145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.167221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.167567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:03.666255  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.666326  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.666639  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.166767  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.667023  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.667100  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.667434  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:04.667488  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:05.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.166259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.166604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:05.666866  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.666932  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.167087  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.167170  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.167507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.666273  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.666702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:07.166389  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.166454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.166729  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:07.166773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:07.666440  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.666529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.666861  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.166628  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.166712  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.167093  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.666822  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.666890  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.667183  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:09.167074  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.167152  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.167512  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:09.167567  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:09.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.666352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.666710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.166961  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.167396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.666160  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.166637  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.666393  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.666463  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.666766  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:11.666808  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:12.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.166645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:12.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.666717  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.166302  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.166710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.666374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:14.166633  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.166711  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.167091  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:14.167149  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:14.666871  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.666946  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.667269  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.167061  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.167138  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.167476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.666203  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.666281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.666622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.166164  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.166245  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.166507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.666216  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:17.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.166577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:17.666191  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.666256  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.666511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.166212  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.166315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.166633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.666248  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:19.166505  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.166576  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.166870  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:19.166918  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:19.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.666567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.166357  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.666369  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.666443  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.666785  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:21.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:22.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.166561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.166824  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:22.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.666368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.166281  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.166368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.166699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.666210  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.666283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:24.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.166660  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.167035  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:24.167111  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:24.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.667230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.166928  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.167024  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.667147  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.667223  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.667622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.166295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.666243  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.666504  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:26.666554  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:27.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.166660  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:27.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.166197  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.166524  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.666680  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:28.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:29.166765  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.166840  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.167165  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:29.666897  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.167174  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.167271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.167625  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.666334  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.666419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.666807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:30.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:31.167152  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.167536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:31.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.166351  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.666287  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.666548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:33.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:33.166706  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:33.666243  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.166799  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.666282  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.666375  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.666726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.166319  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.166392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.666514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:35.666568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:36.166250  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.166626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:36.666324  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.666401  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.666725  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.166908  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.166975  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.667118  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.667398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:37.667447  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:38.166151  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.166226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.166528  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:38.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.666633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.166754  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.167075  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.666637  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.666714  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.667049  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:40.166341  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.166420  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.166681  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:40.166728  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:40.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.666455  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.666787  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.666356  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.666429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:42.166327  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.166411  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.166822  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:42.166896  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:42.666589  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.666665  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.667015  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.166747  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.166812  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.167088  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.666863  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.666934  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.667289  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:44.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.166981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.167339  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:44.167397  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:44.666667  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.666740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.667046  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.166921  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.167029  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.666175  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.666253  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.666621  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.166254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.166514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:46.666754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:47.166451  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.166864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:47.667182  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.667255  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.667579  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.666341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:49.166748  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.166817  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:49.167250  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:49.666922  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.667010  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.166155  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.666900  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.667180  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:51.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.167345  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:51.167391  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:51.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.667233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.667577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.166264  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.666171  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.666249  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.166366  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.666529  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:53.666576  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:54.166567  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.166645  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:54.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.667510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.166265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.166542  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:55.666707  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.166311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.166642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:56.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.666282  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.167073  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.167151  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.167546  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.666340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:57.666741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:58.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:58.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.666328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.666634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:44:00.169272  483106 type.go:168] "Request Body" body=""
	W1202 21:44:00.169401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 21:44:00.169464  483106 node_ready.go:38] duration metric: took 6m0.003439328s for node "functional-066896" to be "Ready" ...
	I1202 21:44:00.175124  483106 out.go:203] 
	W1202 21:44:00.178380  483106 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:44:00.178413  483106 out.go:285] * 
	W1202 21:44:00.180645  483106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:44:00.185151  483106 out.go:203] 

** /stderr **
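The final warning in the stderr above ("client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline") comes from the golang.org/x/time/rate limiter that client-go wraps around requests: once the next token cannot arrive before the wait context's deadline, Wait fails immediately instead of blocking. A minimal sketch of that failure mode; the rate and deadline values below are illustrative, not minikube's actual settings:

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// One token every 2s, burst of 1; consume the only token up front.
		lim := rate.NewLimiter(rate.Every(2*time.Second), 1)
		lim.Allow()

		// The deadline (100ms) is closer than the next token (~2s away),
		// so Wait returns the error seen in the log instead of sleeping.
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
		defer cancel()
		if err := lim.Wait(ctx); err != nil {
			fmt.Println(err) // rate: Wait(n=1) would exceed context deadline
		}
	}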
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-066896 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m7.034245938s for "functional-066896" cluster.
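For context on the loop that fills most of the stderr: the soft start failed because the apiserver on 192.168.49.2:8441 never came back, so every ~500ms GET of the node object ended in "connection refused" until the 6m wait deadline expired (the GUEST_START / WaitNodeCondition exit above). A self-contained sketch of that polling pattern in plain net/http; it illustrates the shape of the loop only, not minikube's implementation, and the InsecureSkipVerify transport exists just to keep the sketch short:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same overall deadline and cadence as the log: 6m, one probe per ~500ms.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-066896"

		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("WaitNodeCondition: context deadline exceeded")
				return
			case <-tick.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("node object reachable; would now decode and check the Ready condition")
					return
				}
			}
		}
	}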
I1202 21:44:01.036200  447211 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
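The inspect dump above is mostly consumed programmatically: minikube pulls single fields out of it with Go templates, as in the docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' call that appears later in this log during provisioning. A small sketch of the same extraction driven from Go via the docker CLI; the helper name and error handling are illustrative, not part of minikube:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort runs `docker container inspect -f <template>` to read the host
	// port a container port is published on, mirroring the template minikube
	// itself uses in this log.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("functional-066896", "8441/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Per the NetworkSettings.Ports block above, this prints 33151.
		fmt.Println("apiserver published on 127.0.0.1:" + p)
	}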
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (363.596058ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs -n 25: (1.02597129s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount3 --alsologtostderr -v=1                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh            │ functional-218190 ssh findmnt -T /mount1                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount2                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount3                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount          │ -p functional-218190 --kill=true                                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service list                                                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service list -o json                                                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service --namespace=default --https --url hello-node                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-218190 --alsologtostderr -v=1                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service hello-node --url --format={{.IP}}                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service hello-node --url                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format short --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image          │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete         │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start          │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:37:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:37:54.052280  483106 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:37:54.052518  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052549  483106 out.go:374] Setting ErrFile to fd 2...
	I1202 21:37:54.052570  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052830  483106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:37:54.053229  483106 out.go:368] Setting JSON to false
	I1202 21:37:54.054096  483106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12002,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:37:54.054239  483106 start.go:143] virtualization:  
	I1202 21:37:54.055968  483106 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:37:54.057216  483106 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:37:54.057305  483106 notify.go:221] Checking for updates...
	I1202 21:37:54.059409  483106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:37:54.060390  483106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:54.061474  483106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:37:54.062609  483106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:37:54.063772  483106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:37:54.065317  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:54.065458  483106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:37:54.087852  483106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:37:54.087968  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.157300  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.14827719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.157407  483106 docker.go:319] overlay module found
	I1202 21:37:54.158855  483106 out.go:179] * Using the docker driver based on existing profile
	I1202 21:37:54.160356  483106 start.go:309] selected driver: docker
	I1202 21:37:54.160374  483106 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.160477  483106 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:37:54.160570  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.221500  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.212376823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.221914  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:54.221982  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:54.222036  483106 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.223816  483106 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:37:54.224907  483106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:37:54.226134  483106 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:37:54.227415  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:54.227490  483106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:37:54.247414  483106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:37:54.247439  483106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:37:54.295322  483106 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:37:54.500334  483106 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:37:54.500536  483106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:37:54.500574  483106 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500673  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:37:54.500684  483106 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.936µs
	I1202 21:37:54.500698  483106 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:37:54.500710  483106 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500741  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:37:54.500746  483106 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.194µs
	I1202 21:37:54.500752  483106 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500761  483106 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500788  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:37:54.500788  483106 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:37:54.500792  483106 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.492µs
	I1202 21:37:54.500799  483106 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500809  483106 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500816  483106 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500852  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:37:54.500856  483106 start.go:364] duration metric: took 26.462µs to acquireMachinesLock for "functional-066896"
	I1202 21:37:54.500858  483106 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.838µs
	I1202 21:37:54.500864  483106 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500869  483106 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:37:54.500875  483106 fix.go:54] fixHost starting: 
	I1202 21:37:54.500873  483106 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500901  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:37:54.500905  483106 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.15µs
	I1202 21:37:54.500919  483106 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500928  483106 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500951  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:37:54.500956  483106 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.833µs
	I1202 21:37:54.500961  483106 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:37:54.500970  483106 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500994  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:37:54.500998  483106 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.391µs
	I1202 21:37:54.501003  483106 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:37:54.501011  483106 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.501036  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:37:54.501040  483106 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.097µs
	I1202 21:37:54.501046  483106 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:37:54.501065  483106 cache.go:87] Successfully saved all images to host disk.
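The block above is the per-image cache check: for each image minikube takes a named lock, sees the cached tarball already exists, and records how long the check took. A minimal sketch of that pattern, assuming an in-process mutex map in place of minikube's named file locks (all names here are illustrative, not cache.go's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
		"time"
	)

	var locks sync.Map // cache path -> *sync.Mutex (stand-in for minikube's named file locks)

	func cacheImage(cacheDir, image string) error {
		// "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
		dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		start := time.Now()
		if _, err := os.Stat(dst); err == nil {
			// Matches the "exists" / "took N µs" pair in the log above.
			fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
			return nil
		}
		// ...download the image and save it as a tarball at dst here...
		return nil
	}

	func main() {
		_ = cacheImage("/tmp/minikube-cache", "registry.k8s.io/pause:3.10.1")
	}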
	I1202 21:37:54.501197  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:54.517471  483106 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:37:54.517510  483106 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:37:54.519079  483106 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:37:54.519117  483106 machine.go:94] provisionDockerMachine start ...
	I1202 21:37:54.519205  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.536086  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.536422  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.536437  483106 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:37:54.686523  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.686547  483106 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:37:54.686612  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.710674  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.710988  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.711037  483106 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:37:54.868253  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.868331  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.886749  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.887092  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.887115  483106 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:37:55.036431  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
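The hostname provisioning above is a sequence of shell commands run over SSH against the container's forwarded port (127.0.0.1:33148, with the machine key the log names later). For orientation, a minimal stand-alone sketch of the same `hostname` probe using golang.org/x/crypto/ssh; the endpoint, user, and key path are copied from the log, everything else is illustrative rather than minikube's actual libmachine client:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and endpoint as logged; adjust for your environment.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33148", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // the same probe the log shows
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}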
	I1202 21:37:55.036522  483106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:37:55.036593  483106 ubuntu.go:190] setting up certificates
	I1202 21:37:55.036621  483106 provision.go:84] configureAuth start
	I1202 21:37:55.036718  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:55.055483  483106 provision.go:143] copyHostCerts
	I1202 21:37:55.055534  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055575  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:37:55.055589  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055670  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:37:55.055775  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055797  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:37:55.055803  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055836  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:37:55.055880  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055901  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:37:55.055908  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055941  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:37:55.055998  483106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
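The line above shows the server certificate being generated against the minikube CA with org jenkins.functional-066896 and SANs [127.0.0.1 192.168.49.2 functional-066896 localhost minikube]. A minimal sketch of that step with Go's crypto/x509, assuming a throwaway CA generated in-process instead of loading the ca.pem/ca-key.pem files the log references (not minikube's provision code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Illustrative only: a throwaway CA instead of minikube's ca.pem/ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate with the org and SAN set from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-066896"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-066896", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}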
	I1202 21:37:55.445716  483106 provision.go:177] copyRemoteCerts
	I1202 21:37:55.445788  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:37:55.445829  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.462295  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:55.566646  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:37:55.566707  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:37:55.584230  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:37:55.584339  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:37:55.601138  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:37:55.601197  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:37:55.619092  483106 provision.go:87] duration metric: took 582.43702ms to configureAuth
	I1202 21:37:55.619117  483106 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:37:55.619308  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:55.619413  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.637231  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:55.637559  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:55.637573  483106 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:37:55.956144  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:37:55.956170  483106 machine.go:97] duration metric: took 1.437044454s to provisionDockerMachine
	I1202 21:37:55.956204  483106 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:37:55.956218  483106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:37:55.956294  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:37:55.956339  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.980756  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.091648  483106 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:37:56.095210  483106 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:37:56.095237  483106 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:37:56.095243  483106 command_runner.go:130] > VERSION_ID="12"
	I1202 21:37:56.095248  483106 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:37:56.095253  483106 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:37:56.095256  483106 command_runner.go:130] > ID=debian
	I1202 21:37:56.095270  483106 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:37:56.095275  483106 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:37:56.095281  483106 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:37:56.095363  483106 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:37:56.095385  483106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:37:56.095402  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:37:56.095457  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:37:56.095544  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:37:56.095557  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /etc/ssl/certs/4472112.pem
	I1202 21:37:56.095638  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:37:56.095647  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> /etc/test/nested/copy/447211/hosts
	I1202 21:37:56.095696  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:37:56.103392  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:56.120789  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:37:56.138613  483106 start.go:296] duration metric: took 182.392463ms for postStartSetup
	I1202 21:37:56.138692  483106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:37:56.138730  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.156335  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.255560  483106 command_runner.go:130] > 13%
	I1202 21:37:56.256083  483106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:37:56.260264  483106 command_runner.go:130] > 169G
	I1202 21:37:56.260703  483106 fix.go:56] duration metric: took 1.759824513s for fixHost
	I1202 21:37:56.260720  483106 start.go:83] releasing machines lock for "functional-066896", held for 1.759856579s
	I1202 21:37:56.260787  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:56.278034  483106 ssh_runner.go:195] Run: cat /version.json
	I1202 21:37:56.278057  483106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:37:56.278086  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.278126  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.294975  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.296343  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.394339  483106 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:37:56.394533  483106 ssh_runner.go:195] Run: systemctl --version
	I1202 21:37:56.493105  483106 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:37:56.493163  483106 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:37:56.493186  483106 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:37:56.493258  483106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:37:56.530464  483106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:37:56.534763  483106 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:37:56.534813  483106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:37:56.534914  483106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:37:56.542668  483106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:37:56.542693  483106 start.go:496] detecting cgroup driver to use...
	I1202 21:37:56.542754  483106 detect.go:187] detected "cgroupfs" cgroup driver on host os
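detect.go reports "cgroupfs" for the host here; one common heuristic for telling the cgroup layouts apart (not necessarily the exact check minikube performs) is whether the unified v2 hierarchy is mounted, as in this sketch:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup v2 mounts a unified hierarchy exposing this file; its
		// absence implies a v1 (or hybrid) layout, where kubelet and
		// CRI-O are typically configured with the "cgroupfs" driver.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified)")
		} else {
			fmt.Println("cgroup v1 (legacy or hybrid)")
		}
	}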
	I1202 21:37:56.542818  483106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:37:56.557769  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:37:56.570749  483106 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:37:56.570845  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:37:56.586179  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:37:56.599149  483106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:37:56.708191  483106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:37:56.842013  483106 docker.go:234] disabling docker service ...
	I1202 21:37:56.842082  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:37:56.857073  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:37:56.870370  483106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:37:56.987213  483106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:37:57.106635  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:37:57.119596  483106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:37:57.132314  483106 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 21:37:57.133557  483106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:37:57.133663  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.142404  483106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:37:57.142548  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.151265  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.160043  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.168450  483106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:37:57.177232  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.186240  483106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.194528  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
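The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, default_sysctls). For readers following along, a Go equivalent of the first of those edits, with the path and replacement string copied from the log (an illustrative stand-in, not how minikube itself applies it):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // as in the log
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// (?m) makes ^/$ match per line, mirroring sed's line-oriented edit.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}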
	I1202 21:37:57.203498  483106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:37:57.209931  483106 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:37:57.210879  483106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:37:57.218360  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.328965  483106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:37:57.485223  483106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:37:57.485296  483106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:37:57.489286  483106 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 21:37:57.489311  483106 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:37:57.489318  483106 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 21:37:57.489325  483106 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:57.489330  483106 command_runner.go:130] > Access: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489343  483106 command_runner.go:130] > Modify: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489348  483106 command_runner.go:130] > Change: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489352  483106 command_runner.go:130] >  Birth: -
	I1202 21:37:57.489576  483106 start.go:564] Will wait 60s for crictl version
	I1202 21:37:57.489633  483106 ssh_runner.go:195] Run: which crictl
	I1202 21:37:57.495444  483106 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:37:57.495541  483106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:37:57.522065  483106 command_runner.go:130] > Version:  0.1.0
	I1202 21:37:57.522330  483106 command_runner.go:130] > RuntimeName:  cri-o
	I1202 21:37:57.522612  483106 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 21:37:57.522814  483106 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:37:57.525085  483106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:37:57.525167  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.560503  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.560529  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.560537  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.560542  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.560547  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.560551  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.560555  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.560560  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.560564  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.560568  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.560572  483106 command_runner.go:130] >      static
	I1202 21:37:57.560580  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.560584  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.560589  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.560595  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.560598  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.560603  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.560612  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.560616  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.560620  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.563007  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.589712  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.589787  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.589809  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.589825  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.589855  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.589880  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.589897  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.589914  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.589955  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.589975  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.589991  483106 command_runner.go:130] >      static
	I1202 21:37:57.590007  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.590023  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.590049  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.590069  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.590086  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.590103  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.590120  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.590146  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.590164  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.593809  483106 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:37:57.595025  483106 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:37:57.611773  483106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:37:57.615442  483106 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:37:57.615683  483106 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:37:57.615790  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:57.615841  483106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:37:57.645971  483106 command_runner.go:130] > {
	I1202 21:37:57.645994  483106 command_runner.go:130] >   "images":  [
	I1202 21:37:57.645998  483106 command_runner.go:130] >     {
	I1202 21:37:57.646007  483106 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:37:57.646011  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646017  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:37:57.646020  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646024  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646033  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 21:37:57.646036  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646041  483106 command_runner.go:130] >       "size":  "29035622",
	I1202 21:37:57.646045  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646049  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646052  483106 command_runner.go:130] >     },
	I1202 21:37:57.646054  483106 command_runner.go:130] >     {
	I1202 21:37:57.646060  483106 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:37:57.646068  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646074  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:37:57.646077  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646080  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646088  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 21:37:57.646096  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646101  483106 command_runner.go:130] >       "size":  "74488375",
	I1202 21:37:57.646105  483106 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:37:57.646109  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646112  483106 command_runner.go:130] >     },
	I1202 21:37:57.646115  483106 command_runner.go:130] >     {
	I1202 21:37:57.646121  483106 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:37:57.646124  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646129  483106 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:37:57.646132  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646136  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646147  483106 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 21:37:57.646150  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646157  483106 command_runner.go:130] >       "size":  "60854229",
	I1202 21:37:57.646161  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646165  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646168  483106 command_runner.go:130] >       },
	I1202 21:37:57.646172  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646175  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646178  483106 command_runner.go:130] >     },
	I1202 21:37:57.646181  483106 command_runner.go:130] >     {
	I1202 21:37:57.646187  483106 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:37:57.646191  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646196  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:37:57.646200  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646203  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646211  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 21:37:57.646216  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646220  483106 command_runner.go:130] >       "size":  "84947242",
	I1202 21:37:57.646223  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646227  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646230  483106 command_runner.go:130] >       },
	I1202 21:37:57.646234  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646238  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646241  483106 command_runner.go:130] >     },
	I1202 21:37:57.646243  483106 command_runner.go:130] >     {
	I1202 21:37:57.646250  483106 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:37:57.646253  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646259  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:37:57.646262  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646266  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646274  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 21:37:57.646277  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646285  483106 command_runner.go:130] >       "size":  "72167568",
	I1202 21:37:57.646289  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646292  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646299  483106 command_runner.go:130] >       },
	I1202 21:37:57.646305  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646309  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646313  483106 command_runner.go:130] >     },
	I1202 21:37:57.646316  483106 command_runner.go:130] >     {
	I1202 21:37:57.646322  483106 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:37:57.646326  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646331  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:37:57.646334  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646338  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646345  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 21:37:57.646348  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646352  483106 command_runner.go:130] >       "size":  "74105124",
	I1202 21:37:57.646356  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646360  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646363  483106 command_runner.go:130] >     },
	I1202 21:37:57.646365  483106 command_runner.go:130] >     {
	I1202 21:37:57.646372  483106 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:37:57.646375  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646381  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:37:57.646384  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646387  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646399  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 21:37:57.646403  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646406  483106 command_runner.go:130] >       "size":  "49819792",
	I1202 21:37:57.646409  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646413  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646416  483106 command_runner.go:130] >       },
	I1202 21:37:57.646421  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646424  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646427  483106 command_runner.go:130] >     },
	I1202 21:37:57.646430  483106 command_runner.go:130] >     {
	I1202 21:37:57.646436  483106 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:37:57.646443  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646447  483106 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.646450  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646454  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646461  483106 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 21:37:57.646464  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646468  483106 command_runner.go:130] >       "size":  "517328",
	I1202 21:37:57.646471  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646474  483106 command_runner.go:130] >         "value":  "65535"
	I1202 21:37:57.646477  483106 command_runner.go:130] >       },
	I1202 21:37:57.646481  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646485  483106 command_runner.go:130] >       "pinned":  true
	I1202 21:37:57.646488  483106 command_runner.go:130] >     }
	I1202 21:37:57.646491  483106 command_runner.go:130] >   ]
	I1202 21:37:57.646493  483106 command_runner.go:130] > }
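The JSON above is the raw output of `sudo crictl images --output json`, which minikube uses to decide whether the preload already covers every required image. A minimal sketch of decoding it in Go; the JSON field names are taken directly from the output shown, while the struct and variable names are mine:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Field names mirror the JSON printed above; only the fields used here
	// are declared, the rest are ignored by encoding/json.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, "pinned:", img.Pinned)
		}
	}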
	I1202 21:37:57.648114  483106 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:37:57.648141  483106 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:37:57.648149  483106 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:37:57.648254  483106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:37:57.648333  483106 ssh_runner.go:195] Run: crio config
	I1202 21:37:57.700265  483106 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 21:37:57.700298  483106 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 21:37:57.700306  483106 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 21:37:57.700310  483106 command_runner.go:130] > #
	I1202 21:37:57.700318  483106 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 21:37:57.700324  483106 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 21:37:57.700331  483106 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 21:37:57.700339  483106 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 21:37:57.700343  483106 command_runner.go:130] > # reload'.
	I1202 21:37:57.700350  483106 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 21:37:57.700357  483106 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 21:37:57.700363  483106 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 21:37:57.700373  483106 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 21:37:57.700376  483106 command_runner.go:130] > [crio]
	I1202 21:37:57.700387  483106 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 21:37:57.700395  483106 command_runner.go:130] > # containers images, in this directory.
	I1202 21:37:57.700407  483106 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 21:37:57.700421  483106 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 21:37:57.700427  483106 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 21:37:57.700434  483106 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 21:37:57.700447  483106 command_runner.go:130] > # imagestore = ""
	I1202 21:37:57.700456  483106 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 21:37:57.700462  483106 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 21:37:57.700469  483106 command_runner.go:130] > # storage_driver = "overlay"
	I1202 21:37:57.700475  483106 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 21:37:57.700484  483106 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 21:37:57.700488  483106 command_runner.go:130] > # storage_option = [
	I1202 21:37:57.700493  483106 command_runner.go:130] > # ]
	I1202 21:37:57.700499  483106 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 21:37:57.700508  483106 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 21:37:57.700513  483106 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 21:37:57.700520  483106 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 21:37:57.700528  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 21:37:57.700532  483106 command_runner.go:130] > # always happen on a node reboot
	I1202 21:37:57.700541  483106 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 21:37:57.700555  483106 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 21:37:57.700563  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 21:37:57.700568  483106 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 21:37:57.700573  483106 command_runner.go:130] > # version_file_persist = ""
	I1202 21:37:57.700587  483106 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 21:37:57.700595  483106 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 21:37:57.700603  483106 command_runner.go:130] > # internal_wipe = true
	I1202 21:37:57.700612  483106 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 21:37:57.700617  483106 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 21:37:57.700629  483106 command_runner.go:130] > # internal_repair = true
	I1202 21:37:57.700634  483106 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 21:37:57.700640  483106 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 21:37:57.700650  483106 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 21:37:57.700656  483106 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 21:37:57.700661  483106 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 21:37:57.700667  483106 command_runner.go:130] > [crio.api]
	I1202 21:37:57.700672  483106 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 21:37:57.700677  483106 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 21:37:57.700685  483106 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 21:37:57.700690  483106 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 21:37:57.700699  483106 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 21:37:57.700710  483106 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 21:37:57.700714  483106 command_runner.go:130] > # stream_port = "0"
	I1202 21:37:57.700720  483106 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 21:37:57.700725  483106 command_runner.go:130] > # stream_enable_tls = false
	I1202 21:37:57.700731  483106 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 21:37:57.700954  483106 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 21:37:57.700969  483106 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 21:37:57.700976  483106 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 21:37:57.700981  483106 command_runner.go:130] > # stream_tls_cert = ""
	I1202 21:37:57.700988  483106 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 21:37:57.700994  483106 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 21:37:57.701175  483106 command_runner.go:130] > # stream_tls_key = ""
	I1202 21:37:57.701188  483106 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 21:37:57.701195  483106 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 21:37:57.701200  483106 command_runner.go:130] > # automatically pick up the changes.
	I1202 21:37:57.701204  483106 command_runner.go:130] > # stream_tls_ca = ""
	I1202 21:37:57.701226  483106 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701255  483106 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 21:37:57.701272  483106 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701278  483106 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 21:37:57.701285  483106 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 21:37:57.701296  483106 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 21:37:57.701300  483106 command_runner.go:130] > [crio.runtime]
	I1202 21:37:57.701306  483106 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 21:37:57.701315  483106 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 21:37:57.701318  483106 command_runner.go:130] > # "nofile=1024:2048"
	I1202 21:37:57.701324  483106 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 21:37:57.701328  483106 command_runner.go:130] > # default_ulimits = [
	I1202 21:37:57.701331  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701338  483106 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 21:37:57.701348  483106 command_runner.go:130] > # no_pivot = false
	I1202 21:37:57.701354  483106 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 21:37:57.701360  483106 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 21:37:57.701368  483106 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 21:37:57.701374  483106 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 21:37:57.701385  483106 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 21:37:57.701395  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701399  483106 command_runner.go:130] > # conmon = ""
	I1202 21:37:57.701403  483106 command_runner.go:130] > # Cgroup setting for conmon
	I1202 21:37:57.701410  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 21:37:57.701414  483106 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 21:37:57.701420  483106 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 21:37:57.701425  483106 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 21:37:57.701432  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701438  483106 command_runner.go:130] > # conmon_env = [
	I1202 21:37:57.701441  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701447  483106 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 21:37:57.701459  483106 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 21:37:57.701465  483106 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 21:37:57.701470  483106 command_runner.go:130] > # default_env = [
	I1202 21:37:57.701475  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701481  483106 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 21:37:57.701491  483106 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 21:37:57.701495  483106 command_runner.go:130] > # selinux = false
	I1202 21:37:57.701501  483106 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 21:37:57.701509  483106 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 21:37:57.701516  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701526  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.701533  483106 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 21:37:57.701541  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701545  483106 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 21:37:57.701551  483106 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 21:37:57.701559  483106 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 21:37:57.701566  483106 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 21:37:57.701575  483106 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 21:37:57.701580  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701584  483106 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 21:37:57.701590  483106 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 21:37:57.701595  483106 command_runner.go:130] > # the cgroup blockio controller.
	I1202 21:37:57.701601  483106 command_runner.go:130] > # blockio_config_file = ""
	I1202 21:37:57.701608  483106 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 21:37:57.701614  483106 command_runner.go:130] > # blockio parameters.
	I1202 21:37:57.701618  483106 command_runner.go:130] > # blockio_reload = false
	I1202 21:37:57.701625  483106 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 21:37:57.701628  483106 command_runner.go:130] > # irqbalance daemon.
	I1202 21:37:57.701634  483106 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 21:37:57.701642  483106 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 21:37:57.701649  483106 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 21:37:57.701659  483106 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 21:37:57.701689  483106 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 21:37:57.701703  483106 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 21:37:57.701707  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701711  483106 command_runner.go:130] > # rdt_config_file = ""
	I1202 21:37:57.701717  483106 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 21:37:57.701723  483106 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 21:37:57.701730  483106 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 21:37:57.701736  483106 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 21:37:57.701742  483106 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 21:37:57.701751  483106 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 21:37:57.701755  483106 command_runner.go:130] > # will be added.
	I1202 21:37:57.701763  483106 command_runner.go:130] > # default_capabilities = [
	I1202 21:37:57.701968  483106 command_runner.go:130] > # 	"CHOWN",
	I1202 21:37:57.702017  483106 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 21:37:57.702029  483106 command_runner.go:130] > # 	"FSETID",
	I1202 21:37:57.702033  483106 command_runner.go:130] > # 	"FOWNER",
	I1202 21:37:57.702037  483106 command_runner.go:130] > # 	"SETGID",
	I1202 21:37:57.702040  483106 command_runner.go:130] > # 	"SETUID",
	I1202 21:37:57.702175  483106 command_runner.go:130] > # 	"SETPCAP",
	I1202 21:37:57.702197  483106 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 21:37:57.702202  483106 command_runner.go:130] > # 	"KILL",
	I1202 21:37:57.702205  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702213  483106 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 21:37:57.702220  483106 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 21:37:57.702225  483106 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 21:37:57.702232  483106 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 21:37:57.702247  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702251  483106 command_runner.go:130] > default_sysctls = [
	I1202 21:37:57.702282  483106 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 21:37:57.702290  483106 command_runner.go:130] > ]
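For reference, the two settings that are actually active (uncommented) in this config, together with a narrowed capability list, would look like this in a drop-in; the two-entry capability list is illustrative only, not what this run uses:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	# Containers get only these capabilities unless the pod spec adds more.
	default_capabilities = [
		"CHOWN",
		"NET_BIND_SERVICE",
	]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]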
	I1202 21:37:57.702302  483106 command_runner.go:130] > # List of devices on the host that a
	I1202 21:37:57.702309  483106 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 21:37:57.702317  483106 command_runner.go:130] > # allowed_devices = [
	I1202 21:37:57.702321  483106 command_runner.go:130] > # 	"/dev/fuse",
	I1202 21:37:57.702326  483106 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 21:37:57.702496  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702509  483106 command_runner.go:130] > # List of additional devices, specified as
	I1202 21:37:57.702523  483106 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 21:37:57.702529  483106 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 21:37:57.702539  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702546  483106 command_runner.go:130] > # additional_devices = [
	I1202 21:37:57.702553  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702559  483106 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 21:37:57.702562  483106 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 21:37:57.702593  483106 command_runner.go:130] > # 	"/etc/cdi",
	I1202 21:37:57.702605  483106 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 21:37:57.702609  483106 command_runner.go:130] > # ]
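A sketch combining the device-related options, reusing the commented example values above (the annotation-gated allowlist and a statically injected device):

	[crio.runtime]
	# Devices a pod may request via the "io.kubernetes.cri-o.Devices" annotation.
	allowed_devices = ["/dev/fuse", "/dev/net/tun"]
	# Injected into every container: "<device-on-host>:<device-on-container>:<permissions>".
	additional_devices = ["/dev/sdc:/dev/xvdc:rwm"]
	# Directories scanned for CDI Spec files.
	cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]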
	I1202 21:37:57.702616  483106 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 21:37:57.702632  483106 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 21:37:57.702636  483106 command_runner.go:130] > # Defaults to false.
	I1202 21:37:57.702641  483106 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 21:37:57.702647  483106 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 21:37:57.702655  483106 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 21:37:57.702659  483106 command_runner.go:130] > # hooks_dir = [
	I1202 21:37:57.702849  483106 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 21:37:57.702860  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702867  483106 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 21:37:57.702879  483106 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 21:37:57.702886  483106 command_runner.go:130] > # its default mounts from the following two files:
	I1202 21:37:57.702893  483106 command_runner.go:130] > #
	I1202 21:37:57.702899  483106 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 21:37:57.702905  483106 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 21:37:57.702911  483106 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 21:37:57.702913  483106 command_runner.go:130] > #
	I1202 21:37:57.702919  483106 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 21:37:57.702925  483106 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 21:37:57.702932  483106 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 21:37:57.702937  483106 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 21:37:57.702942  483106 command_runner.go:130] > #
	I1202 21:37:57.702974  483106 command_runner.go:130] > # default_mounts_file = ""
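If a non-default mounts file were desired, only the pointer lives in crio.conf; the file itself holds one /SRC:/DST pair per line. A sketch using the override path named above (the example mount in the comment is an assumption):

	[crio.runtime]
	# Each line of this file has the form /SRC:/DST, e.g. /etc/ssl/certs:/etc/ssl/certs
	default_mounts_file = "/etc/containers/mounts.conf"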
	I1202 21:37:57.702983  483106 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 21:37:57.702990  483106 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 21:37:57.703009  483106 command_runner.go:130] > # pids_limit = -1
	I1202 21:37:57.703018  483106 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 21:37:57.703024  483106 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 21:37:57.703030  483106 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 21:37:57.703039  483106 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 21:37:57.703043  483106 command_runner.go:130] > # log_size_max = -1
	I1202 21:37:57.703053  483106 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 21:37:57.703070  483106 command_runner.go:130] > # log_to_journald = false
	I1202 21:37:57.703082  483106 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 21:37:57.703090  483106 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 21:37:57.703102  483106 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 21:37:57.703112  483106 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 21:37:57.703121  483106 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 21:37:57.703294  483106 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 21:37:57.703314  483106 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 21:37:57.703388  483106 command_runner.go:130] > # read_only = false
	I1202 21:37:57.703403  483106 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 21:37:57.703410  483106 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 21:37:57.703414  483106 command_runner.go:130] > # live configuration reload.
	I1202 21:37:57.703418  483106 command_runner.go:130] > # log_level = "info"
	I1202 21:37:57.703429  483106 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 21:37:57.703434  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.703441  483106 command_runner.go:130] > # log_filter = ""
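Both logging options support live configuration reload, so a drop-in like the following sketch can be picked up without restarting containers (the "debug" level and the filter regex are illustrative):

	[crio.runtime]
	log_level = "debug"      # fatal, panic, error, warn, info, debug or trace
	log_filter = "^pod"      # keep only messages matching this regular expression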
	I1202 21:37:57.703448  483106 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703456  483106 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 21:37:57.703459  483106 command_runner.go:130] > # separated by commas.
	I1202 21:37:57.703467  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703471  483106 command_runner.go:130] > # uid_mappings = ""
	I1202 21:37:57.703477  483106 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703489  483106 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 21:37:57.703492  483106 command_runner.go:130] > # separated by commas.
	I1202 21:37:57.703500  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703504  483106 command_runner.go:130] > # gid_mappings = ""
	I1202 21:37:57.703510  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 21:37:57.703518  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703524  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703532  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703561  483106 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 21:37:57.703582  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 21:37:57.703590  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703596  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703606  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703769  483106 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 21:37:57.703787  483106 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 21:37:57.703803  483106 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 21:37:57.703810  483106 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1202 21:37:57.703970  483106 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 21:37:57.703985  483106 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 21:37:57.703996  483106 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 21:37:57.704002  483106 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 21:37:57.704010  483106 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 21:37:57.704013  483106 command_runner.go:130] > # drop_infra_ctr = true
	I1202 21:37:57.704023  483106 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 21:37:57.704035  483106 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1202 21:37:57.704043  483106 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 21:37:57.704046  483106 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 21:37:57.704053  483106 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 21:37:57.704059  483106 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 21:37:57.704066  483106 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 21:37:57.704073  483106 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 21:37:57.704077  483106 command_runner.go:130] > # shared_cpuset = ""
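As an example of the CPU-placement pair described above, pinning infra containers to housekeeping CPUs while letting guaranteed containers share them (the "0-1" mask is illustrative and would normally mirror the kubelet reserved-cpus setting):

	[crio.runtime]
	infra_ctr_cpuset = "0-1"   # infra (pause) containers run here
	shared_cpuset = "0-1"      # guaranteed containers may also use these CPUs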
	I1202 21:37:57.704088  483106 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 21:37:57.704094  483106 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 21:37:57.704098  483106 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 21:37:57.704111  483106 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 21:37:57.704115  483106 command_runner.go:130] > # pinns_path = ""
	I1202 21:37:57.704126  483106 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 21:37:57.704133  483106 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 21:37:57.704159  483106 command_runner.go:130] > # enable_criu_support = true
	I1202 21:37:57.704170  483106 command_runner.go:130] > # Enable/disable the generation of the container and
	I1202 21:37:57.704177  483106 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1202 21:37:57.704281  483106 command_runner.go:130] > # enable_pod_events = false
	I1202 21:37:57.704302  483106 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 21:37:57.704308  483106 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 21:37:57.704428  483106 command_runner.go:130] > # default_runtime = "crun"
	I1202 21:37:57.704441  483106 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 21:37:57.704455  483106 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1202 21:37:57.704470  483106 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 21:37:57.704476  483106 command_runner.go:130] > # creation as a file is not desired either.
	I1202 21:37:57.704485  483106 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 21:37:57.704501  483106 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 21:37:57.704506  483106 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 21:37:57.704638  483106 command_runner.go:130] > # ]
	I1202 21:37:57.704649  483106 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 21:37:57.704656  483106 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 21:37:57.704663  483106 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 21:37:57.704668  483106 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 21:37:57.704671  483106 command_runner.go:130] > #
	I1202 21:37:57.704676  483106 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 21:37:57.704681  483106 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 21:37:57.704688  483106 command_runner.go:130] > # runtime_type = "oci"
	I1202 21:37:57.704693  483106 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 21:37:57.704697  483106 command_runner.go:130] > # inherit_default_runtime = false
	I1202 21:37:57.704710  483106 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 21:37:57.704715  483106 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 21:37:57.704720  483106 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 21:37:57.704728  483106 command_runner.go:130] > # monitor_env = []
	I1202 21:37:57.704733  483106 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 21:37:57.704737  483106 command_runner.go:130] > # allowed_annotations = []
	I1202 21:37:57.704743  483106 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 21:37:57.704749  483106 command_runner.go:130] > # no_sync_log = false
	I1202 21:37:57.704753  483106 command_runner.go:130] > # default_annotations = {}
	I1202 21:37:57.704757  483106 command_runner.go:130] > # stream_websockets = false
	I1202 21:37:57.704761  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.704791  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.704803  483106 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 21:37:57.704810  483106 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 21:37:57.704816  483106 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 21:37:57.704822  483106 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 21:37:57.704828  483106 command_runner.go:130] > #   in $PATH.
	I1202 21:37:57.704835  483106 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 21:37:57.704844  483106 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 21:37:57.704850  483106 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 21:37:57.704853  483106 command_runner.go:130] > #   state.
	I1202 21:37:57.704859  483106 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 21:37:57.704870  483106 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 21:37:57.704879  483106 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 21:37:57.704885  483106 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 21:37:57.704891  483106 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 21:37:57.704899  483106 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 21:37:57.704907  483106 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 21:37:57.704917  483106 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 21:37:57.704923  483106 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 21:37:57.704931  483106 command_runner.go:130] > #   The currently recognized values are:
	I1202 21:37:57.704940  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 21:37:57.704947  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 21:37:57.704954  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 21:37:57.704962  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 21:37:57.704969  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 21:37:57.704978  483106 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 21:37:57.704985  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 21:37:57.704992  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 21:37:57.705001  483106 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 21:37:57.705008  483106 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 21:37:57.705017  483106 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 21:37:57.705023  483106 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 21:37:57.705029  483106 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 21:37:57.705035  483106 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 21:37:57.705045  483106 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 21:37:57.705054  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 21:37:57.705068  483106 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 21:37:57.705072  483106 command_runner.go:130] > #   deprecated option "conmon".
	I1202 21:37:57.705080  483106 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 21:37:57.705088  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 21:37:57.705095  483106 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 21:37:57.705101  483106 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 21:37:57.705108  483106 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 21:37:57.705113  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 21:37:57.705129  483106 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1202 21:37:57.705135  483106 command_runner.go:130] > #   conmon-rs by using:
	I1202 21:37:57.705143  483106 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 21:37:57.705154  483106 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 21:37:57.705165  483106 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 21:37:57.705176  483106 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 21:37:57.705183  483106 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 21:37:57.705191  483106 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 21:37:57.705198  483106 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 21:37:57.705203  483106 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 21:37:57.705214  483106 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 21:37:57.705222  483106 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 21:37:57.705228  483106 command_runner.go:130] > #   when a machine crash happens.
	I1202 21:37:57.705235  483106 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 21:37:57.705243  483106 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 21:37:57.705253  483106 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 21:37:57.705257  483106 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 21:37:57.705263  483106 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 21:37:57.705273  483106 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
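Assembling those keys, a hypothetical VM-type handler entry could look like the sketch below; the kata binary and configuration paths are assumptions for illustration, and runtime_config_path is only legal here because runtime_type is "vm":

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	# Do not pass host devices into privileged containers under this handler.
	privileged_without_host_devices = true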
	I1202 21:37:57.705275  483106 command_runner.go:130] > #
	I1202 21:37:57.705280  483106 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 21:37:57.705285  483106 command_runner.go:130] > #
	I1202 21:37:57.705292  483106 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 21:37:57.705301  483106 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 21:37:57.705304  483106 command_runner.go:130] > #
	I1202 21:37:57.705310  483106 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 21:37:57.705317  483106 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 21:37:57.705322  483106 command_runner.go:130] > #
	I1202 21:37:57.705328  483106 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 21:37:57.705331  483106 command_runner.go:130] > # feature.
	I1202 21:37:57.705336  483106 command_runner.go:130] > #
	I1202 21:37:57.705342  483106 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 21:37:57.705350  483106 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 21:37:57.705360  483106 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 21:37:57.705367  483106 command_runner.go:130] > # a blocked syscall and terminate the workload after a timeout of 5
	I1202 21:37:57.705375  483106 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 21:37:57.705382  483106 command_runner.go:130] > #
	I1202 21:37:57.705388  483106 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 21:37:57.705397  483106 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 21:37:57.705399  483106 command_runner.go:130] > #
	I1202 21:37:57.705405  483106 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 21:37:57.705411  483106 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 21:37:57.705416  483106 command_runner.go:130] > #
	I1202 21:37:57.705422  483106 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 21:37:57.705428  483106 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 21:37:57.705433  483106 command_runner.go:130] > # limitation.
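Concretely, enabling the notifier only requires whitelisting the annotation on a handler; a sketch of what that would add to the crun handler defined below (this run whitelists io.containers.trace-syscall instead, and the pod would still need the stop annotation plus restartPolicy: Never):

	[crio.runtime.runtimes.crun]
	# Allow pods under this handler to request the seccomp notifier.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]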
	I1202 21:37:57.705469  483106 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 21:37:57.705480  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 21:37:57.705484  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705488  483106 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 21:37:57.705492  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705499  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705503  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705510  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705514  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705518  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705521  483106 command_runner.go:130] > allowed_annotations = [
	I1202 21:37:57.705734  483106 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 21:37:57.705745  483106 command_runner.go:130] > ]
	I1202 21:37:57.705770  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705779  483106 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 21:37:57.705849  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 21:37:57.705872  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705883  483106 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 21:37:57.705901  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705906  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705910  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705915  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705921  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705925  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705929  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705937  483106 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 21:37:57.705944  483106 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 21:37:57.705965  483106 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 21:37:57.705974  483106 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 21:37:57.705985  483106 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 21:37:57.706000  483106 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 21:37:57.706009  483106 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 21:37:57.706015  483106 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 21:37:57.706025  483106 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 21:37:57.706051  483106 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 21:37:57.706057  483106 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1202 21:37:57.706077  483106 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 21:37:57.706082  483106 command_runner.go:130] > # Example:
	I1202 21:37:57.706087  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 21:37:57.706091  483106 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 21:37:57.706096  483106 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 21:37:57.706102  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 21:37:57.706105  483106 command_runner.go:130] > # cpuset = "0-1"
	I1202 21:37:57.706108  483106 command_runner.go:130] > # cpushares = "5"
	I1202 21:37:57.706112  483106 command_runner.go:130] > # cpuquota = "1000"
	I1202 21:37:57.706116  483106 command_runner.go:130] > # cpuperiod = "100000"
	I1202 21:37:57.706120  483106 command_runner.go:130] > # cpulimit = "35"
	I1202 21:37:57.706126  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.706131  483106 command_runner.go:130] > # The workload name is workload-type.
	I1202 21:37:57.706143  483106 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 21:37:57.706160  483106 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 21:37:57.706180  483106 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 21:37:57.706189  483106 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 21:37:57.706195  483106 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1202 21:37:57.706229  483106 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 21:37:57.706243  483106 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 21:37:57.706247  483106 command_runner.go:130] > # Default value is set to true
	I1202 21:37:57.706253  483106 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 21:37:57.706261  483106 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 21:37:57.706266  483106 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 21:37:57.706271  483106 command_runner.go:130] > # Default value is set to 'false'
	I1202 21:37:57.706275  483106 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 21:37:57.706280  483106 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 21:37:57.706291  483106 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 21:37:57.706299  483106 command_runner.go:130] > # timezone = ""
	I1202 21:37:57.706306  483106 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 21:37:57.706308  483106 command_runner.go:130] > #
	I1202 21:37:57.706315  483106 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 21:37:57.706326  483106 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 21:37:57.706329  483106 command_runner.go:130] > [crio.image]
	I1202 21:37:57.706338  483106 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 21:37:57.706348  483106 command_runner.go:130] > # default_transport = "docker://"
	I1202 21:37:57.706354  483106 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 21:37:57.706360  483106 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706497  483106 command_runner.go:130] > # global_auth_file = ""
	I1202 21:37:57.706512  483106 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 21:37:57.706518  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706617  483106 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.706659  483106 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 21:37:57.706671  483106 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706677  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706682  483106 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 21:37:57.706688  483106 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 21:37:57.706698  483106 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 21:37:57.706714  483106 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 21:37:57.706730  483106 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 21:37:57.706734  483106 command_runner.go:130] > # pause_command = "/pause"
	I1202 21:37:57.706749  483106 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 21:37:57.706756  483106 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 21:37:57.706771  483106 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 21:37:57.706777  483106 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 21:37:57.706783  483106 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 21:37:57.706791  483106 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 21:37:57.706795  483106 command_runner.go:130] > # pinned_images = [
	I1202 21:37:57.706798  483106 command_runner.go:130] > # ]
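A sketch of the three pattern styles the pinning list accepts; the pause image matches the default named above, while the other two entries are illustrative:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",    # exact: must match the entire name
		"registry.k8s.io/kube-apiserver*", # glob: trailing wildcard
		"*etcd*",                          # keyword: wildcards on both ends
	]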
	I1202 21:37:57.706806  483106 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 21:37:57.706813  483106 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 21:37:57.706822  483106 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 21:37:57.706828  483106 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 21:37:57.706834  483106 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 21:37:57.707022  483106 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 21:37:57.707046  483106 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 21:37:57.707056  483106 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 21:37:57.707066  483106 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 21:37:57.707073  483106 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1202 21:37:57.707084  483106 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1202 21:37:57.707105  483106 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 21:37:57.707129  483106 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 21:37:57.707141  483106 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 21:37:57.707146  483106 command_runner.go:130] > # changing them here.
	I1202 21:37:57.707158  483106 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 21:37:57.707163  483106 command_runner.go:130] > # insecure_registries = [
	I1202 21:37:57.707278  483106 command_runner.go:130] > # ]
	I1202 21:37:57.707303  483106 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 21:37:57.707309  483106 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 21:37:57.707323  483106 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 21:37:57.707334  483106 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 21:37:57.707518  483106 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 21:37:57.707543  483106 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 21:37:57.707551  483106 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 21:37:57.707565  483106 command_runner.go:130] > # auto_reload_registries = false
	I1202 21:37:57.707577  483106 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 21:37:57.707586  483106 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1202 21:37:57.707593  483106 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 21:37:57.707601  483106 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 21:37:57.707626  483106 command_runner.go:130] > # The mode of short name resolution.
	I1202 21:37:57.707639  483106 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 21:37:57.707646  483106 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 21:37:57.707652  483106 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 21:37:57.707737  483106 command_runner.go:130] > # short_name_mode = "enforcing"
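The two pull-behavior settings combine as in this sketch; "enforcing" is the stated default, and the 10-minute timeout is an illustrative value (pull progress would then be reported every pull_progress_timeout / 10, i.e. once a minute):

	[crio.image]
	short_name_mode = "enforcing"   # fail ambiguous short-name pulls
	pull_progress_timeout = "10m"   # cancel pulls that stall for 10 minutes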
	I1202 21:37:57.707776  483106 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1202 21:37:57.707797  483106 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 21:37:57.707804  483106 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 21:37:57.707810  483106 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 21:37:57.707814  483106 command_runner.go:130] > # CNI plugins.
	I1202 21:37:57.707818  483106 command_runner.go:130] > [crio.network]
	I1202 21:37:57.707825  483106 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 21:37:57.707834  483106 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1202 21:37:57.707838  483106 command_runner.go:130] > # cni_default_network = ""
	I1202 21:37:57.707843  483106 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 21:37:57.707880  483106 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 21:37:57.707894  483106 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 21:37:57.707898  483106 command_runner.go:130] > # plugin_dirs = [
	I1202 21:37:57.708100  483106 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 21:37:57.708328  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708337  483106 command_runner.go:130] > # List of included pod metrics.
	I1202 21:37:57.708504  483106 command_runner.go:130] > # included_pod_metrics = [
	I1202 21:37:57.708692  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708716  483106 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1202 21:37:57.708721  483106 command_runner.go:130] > [crio.metrics]
	I1202 21:37:57.708725  483106 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 21:37:57.709042  483106 command_runner.go:130] > # enable_metrics = false
	I1202 21:37:57.709050  483106 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 21:37:57.709056  483106 command_runner.go:130] > # By default, all metrics are enabled.
	I1202 21:37:57.709063  483106 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 21:37:57.709070  483106 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 21:37:57.709082  483106 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 21:37:57.709226  483106 command_runner.go:130] > # metrics_collectors = [
	I1202 21:37:57.709424  483106 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 21:37:57.709616  483106 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 21:37:57.709807  483106 command_runner.go:130] > # 	"containers_oom_total",
	I1202 21:37:57.709999  483106 command_runner.go:130] > # 	"processes_defunct",
	I1202 21:37:57.710186  483106 command_runner.go:130] > # 	"operations_total",
	I1202 21:37:57.710377  483106 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 21:37:57.710569  483106 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 21:37:57.710759  483106 command_runner.go:130] > # 	"operations_errors_total",
	I1202 21:37:57.710953  483106 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 21:37:57.711154  483106 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 21:37:57.711347  483106 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 21:37:57.711541  483106 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 21:37:57.711734  483106 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 21:37:57.711929  483106 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 21:37:57.712114  483106 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 21:37:57.712326  483106 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 21:37:57.712521  483106 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 21:37:57.712708  483106 command_runner.go:130] > # ]
	I1202 21:37:57.712718  483106 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 21:37:57.713101  483106 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 21:37:57.713111  483106 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 21:37:57.713462  483106 command_runner.go:130] > # metrics_port = 9090
	I1202 21:37:57.713472  483106 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 21:37:57.713766  483106 command_runner.go:130] > # metrics_socket = ""
	I1202 21:37:57.713798  483106 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 21:37:57.713843  483106 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 21:37:57.713867  483106 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 21:37:57.713890  483106 command_runner.go:130] > # certificate on any modification event.
	I1202 21:37:57.714026  483106 command_runner.go:130] > # metrics_cert = ""
	I1202 21:37:57.714049  483106 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 21:37:57.714055  483106 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 21:37:57.714333  483106 command_runner.go:130] > # metrics_key = ""
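Putting the metrics options together, a sketch that turns the server on at the default address and restricts collection to two of the collectors listed above (bare names are equivalent to their "crio_" and "container_runtime_crio_" prefixed forms):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]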
	I1202 21:37:57.714367  483106 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 21:37:57.714411  483106 command_runner.go:130] > [crio.tracing]
	I1202 21:37:57.714434  483106 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 21:37:57.714690  483106 command_runner.go:130] > # enable_tracing = false
	I1202 21:37:57.714730  483106 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 21:37:57.715040  483106 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 21:37:57.715074  483106 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 21:37:57.715400  483106 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
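A sketch that exports every span to the default collector address, using the always-sample value mentioned in the comment above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# 1000000 samples per million spans, i.e. sample everything.
	tracing_sampling_rate_per_million = 1000000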
	I1202 21:37:57.715424  483106 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 21:37:57.715465  483106 command_runner.go:130] > [crio.nri]
	I1202 21:37:57.715486  483106 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 21:37:57.715706  483106 command_runner.go:130] > # enable_nri = true
	I1202 21:37:57.715731  483106 command_runner.go:130] > # NRI socket to listen on.
	I1202 21:37:57.716042  483106 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 21:37:57.716072  483106 command_runner.go:130] > # NRI plugin directory to use.
	I1202 21:37:57.716381  483106 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 21:37:57.716412  483106 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 21:37:57.716702  483106 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 21:37:57.716734  483106 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 21:37:57.716910  483106 command_runner.go:130] > # nri_disable_connections = false
	I1202 21:37:57.716983  483106 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 21:37:57.717007  483106 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 21:37:57.717025  483106 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 21:37:57.717040  483106 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 21:37:57.717084  483106 command_runner.go:130] > # NRI default validator configuration.
	I1202 21:37:57.717109  483106 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 21:37:57.717127  483106 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 21:37:57.717180  483106 command_runner.go:130] > # can be restricted/rejected:
	I1202 21:37:57.717207  483106 command_runner.go:130] > # - OCI hook injection
	I1202 21:37:57.717238  483106 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 21:37:57.717387  483106 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 21:37:57.717408  483106 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 21:37:57.717448  483106 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 21:37:57.717469  483106 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 21:37:57.717489  483106 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 21:37:57.717520  483106 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 21:37:57.717542  483106 command_runner.go:130] > #
	I1202 21:37:57.717559  483106 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 21:37:57.717588  483106 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 21:37:57.717614  483106 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 21:37:57.717634  483106 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 21:37:57.717673  483106 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 21:37:57.717700  483106 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 21:37:57.717721  483106 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 21:37:57.717750  483106 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 21:37:57.717775  483106 command_runner.go:130] > # ]
	I1202 21:37:57.717791  483106 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
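A sketch of a hardened NRI setup using the keys listed above: NRI stays enabled on the default socket, and the built-in validator rejects OCI hook injection (the validator values are illustrative; this run keeps the defaults):

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_registration_timeout = "5s"

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true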
	I1202 21:37:57.717809  483106 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 21:37:57.717844  483106 command_runner.go:130] > [crio.stats]
	I1202 21:37:57.717862  483106 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 21:37:57.717880  483106 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 21:37:57.717896  483106 command_runner.go:130] > # stats_collection_period = 0
	I1202 21:37:57.717933  483106 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 21:37:57.717955  483106 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 21:37:57.717969  483106 command_runner.go:130] > # collection_period = 0
	I1202 21:37:57.719581  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.679996811Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 21:37:57.719602  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680035195Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 21:37:57.719612  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680068245Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 21:37:57.719634  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680094978Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 21:37:57.719650  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680175192Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.719661  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680551245Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 21:37:57.719673  483106 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 21:37:57.719793  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:57.719806  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:57.719822  483106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:37:57.719854  483106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:37:57.719977  483106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
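The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged just before them. As a rough sketch of that rendering step, with an illustrative struct and template rather than minikube's actual ones, a Go text/template over a small options type produces the same shape:

	package main

	import (
		"os"
		"text/template"
	)

	// opts is a hypothetical subset of the kubeadm options logged above;
	// minikube's real struct carries many more fields.
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		// Values taken from the log: 192.168.49.2:8441, functional-066896.
		if err := t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.49.2",
			APIServerPort:    8441,
			NodeName:         "functional-066896",
		}); err != nil {
			panic(err)
		}
	}

The rendered bytes are then copied to /var/tmp/minikube/kubeadm.yaml.new (the 2221-byte scp just below) and later diffed against the existing kubeadm.yaml to decide whether reconfiguration is needed.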
	I1202 21:37:57.720050  483106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:37:57.727128  483106 command_runner.go:130] > kubeadm
	I1202 21:37:57.727200  483106 command_runner.go:130] > kubectl
	I1202 21:37:57.727217  483106 command_runner.go:130] > kubelet
	I1202 21:37:57.727679  483106 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:37:57.727758  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:37:57.735128  483106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:37:57.747401  483106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:37:57.759635  483106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:37:57.772168  483106 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:37:57.775704  483106 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:37:57.775781  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.892482  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:58.414394  483106 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:37:58.414415  483106 certs.go:195] generating shared ca certs ...
	I1202 21:37:58.414431  483106 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:58.414617  483106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:37:58.414690  483106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:37:58.414702  483106 certs.go:257] generating profile certs ...
	I1202 21:37:58.414822  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:37:58.414884  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:37:58.414927  483106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:37:58.414939  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:37:58.414953  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:37:58.414964  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:37:58.414980  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:37:58.414991  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:37:58.415019  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:37:58.415030  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:37:58.415042  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:37:58.415094  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:37:58.415127  483106 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:37:58.415140  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:37:58.415171  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:37:58.415199  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:37:58.415223  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:37:58.415279  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:58.415327  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.415344  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem -> /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.415358  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.415948  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:37:58.434575  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:37:58.454217  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:37:58.476636  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:37:58.499852  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:37:58.517799  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:37:58.537626  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:37:58.556051  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:37:58.573621  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:37:58.591561  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:37:58.609240  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:37:58.626214  483106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:37:58.638898  483106 ssh_runner.go:195] Run: openssl version
	I1202 21:37:58.644941  483106 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:37:58.645379  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:37:58.653758  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657242  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657279  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657350  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.697450  483106 command_runner.go:130] > b5213941
	I1202 21:37:58.697880  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:37:58.705830  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:37:58.714550  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718238  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718320  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718390  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.760939  483106 command_runner.go:130] > 51391683
	I1202 21:37:58.761409  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:37:58.769112  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:37:58.777300  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780878  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780914  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780988  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.821311  483106 command_runner.go:130] > 3ec20f2e
	I1202 21:37:58.821773  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
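Each CA bundle in /usr/share/ca-certificates gets the same treatment above: hash it with openssl x509 -hash -noout, then symlink /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL's hash-based directory lookup finds it (hashes b5213941, 51391683 and 3ec20f2e in this run). A minimal Go sketch replaying those two commands (paths from the log; error handling trimmed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCA mirrors the two commands in the log: derive the OpenSSL
	// subject hash, then point /etc/ssl/certs/<hash>.0 at the PEM file.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs replaces any stale link, matching the log's "ln -fs".
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}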
	I1202 21:37:58.829482  483106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833099  483106 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833249  483106 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:37:58.833277  483106 command_runner.go:130] > Device: 259,1	Inode: 1309045     Links: 1
	I1202 21:37:58.833296  483106 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:58.833318  483106 command_runner.go:130] > Access: 2025-12-02 21:33:51.106313964 +0000
	I1202 21:37:58.833335  483106 command_runner.go:130] > Modify: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833354  483106 command_runner.go:130] > Change: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833368  483106 command_runner.go:130] >  Birth: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833452  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:37:58.873701  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.874162  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:37:58.914810  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.915281  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:37:58.957479  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.957884  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:37:58.998366  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.998755  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:37:59.041919  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.042032  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:37:59.082406  483106 command_runner.go:130] > Certificate will not expire
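The six openssl x509 -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours; "Certificate will not expire" means NotAfter is more than 86400 seconds away, so nothing needs regenerating. The equivalent check in Go with crypto/x509, as a sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			panic(err)
		}
		if !soon {
			fmt.Println("Certificate will not expire") // same wording as the log
		}
	}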
	I1202 21:37:59.082849  483106 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:59.082947  483106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:37:59.083063  483106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:37:59.109816  483106 cri.go:89] found id: ""
	I1202 21:37:59.109903  483106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:37:59.116871  483106 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:37:59.116937  483106 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:37:59.116958  483106 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:37:59.117791  483106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:37:59.117835  483106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:37:59.117913  483106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:37:59.125060  483106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:37:59.125506  483106 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.125617  483106 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-066896" cluster setting kubeconfig missing "functional-066896" context setting]
	I1202 21:37:59.125900  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.126337  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.126509  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
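The kapi client config logged above is a client-go rest.Config: the host https://192.168.49.2:8441 plus the profile's client certificate pair and the minikube CA. For reference, a minimal sketch that builds an equivalent clientset by hand (paths copied from the log; minikube itself constructs this via kubeconfig loading, not literally like this):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		profile := "/home/jenkins/minikube-integration/21997-444114/.minikube"
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8441",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: profile + "/profiles/functional-066896/client.crt",
				KeyFile:  profile + "/profiles/functional-066896/client.key",
				CAFile:   profile + "/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The node_ready poll below issues the same GET this clientset would:
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-066896", metav1.GetOptions{})
		if err != nil {
			panic(err) // "connection refused" while the apiserver is still down
		}
		fmt.Println(node.Name)
	}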
	I1202 21:37:59.127095  483106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:37:59.127116  483106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:37:59.127122  483106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:37:59.127127  483106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:37:59.127133  483106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:37:59.127170  483106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:37:59.127484  483106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:37:59.134957  483106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:37:59.134991  483106 kubeadm.go:602] duration metric: took 17.137902ms to restartPrimaryControlPlane
	I1202 21:37:59.135014  483106 kubeadm.go:403] duration metric: took 52.172876ms to StartCluster
	I1202 21:37:59.135029  483106 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135086  483106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.135727  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135915  483106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:37:59.136175  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:59.136232  483106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:37:59.136325  483106 addons.go:70] Setting storage-provisioner=true in profile "functional-066896"
	I1202 21:37:59.136339  483106 addons.go:239] Setting addon storage-provisioner=true in "functional-066896"
	I1202 21:37:59.136375  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.136437  483106 addons.go:70] Setting default-storageclass=true in profile "functional-066896"
	I1202 21:37:59.136458  483106 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-066896"
	I1202 21:37:59.136761  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.136798  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.139277  483106 out.go:179] * Verifying Kubernetes components...
	I1202 21:37:59.140771  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:59.165976  483106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:37:59.168845  483106 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.168870  483106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:37:59.168937  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.175656  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.176018  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.176385  483106 addons.go:239] Setting addon default-storageclass=true in "functional-066896"
	I1202 21:37:59.176428  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.176909  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.211203  483106 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:37:59.211229  483106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:37:59.211311  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.225207  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.248989  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.349954  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:59.407494  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.408663  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.165713  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165766  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165797  483106 retry.go:31] will retry after 202.822033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165873  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165889  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165899  483106 retry.go:31] will retry after 281.773783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
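From here both addon applies keep failing the same way: with the apiserver down, kubectl cannot download the OpenAPI schema it needs for client-side validation, so retry.go reschedules each apply with growing, jittered delays (202ms, 281ms, 393ms, 493ms, and eventually seconds, below). A generic Go sketch of that retry shape, not minikube's actual retry implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply re-runs kubectl apply with exponential backoff plus jitter,
	// roughly the cadence visible in the log. The manifest path is from the
	// log; the multipliers are illustrative, not minikube's exact policy.
	func retryApply(manifest string, attempts int) error {
		delay := 200 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
			if err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 6); err != nil {
			fmt.Println("giving up:", err)
		}
	}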
	I1202 21:38:00.166009  483106 node_ready.go:35] waiting up to 6m0s for node "functional-066896" to be "Ready" ...
	I1202 21:38:00.166135  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.166200  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.368900  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.441989  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.442041  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.442063  483106 retry.go:31] will retry after 393.334545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.448331  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.512520  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.512571  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.512592  483106 retry.go:31] will retry after 493.57139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.666814  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.667270  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.835693  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.896509  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.896567  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.896588  483106 retry.go:31] will retry after 517.359335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.006926  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.069882  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.069952  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.069980  483106 retry.go:31] will retry after 823.867865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.167068  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.167622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.415018  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:01.473591  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.473646  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.473665  483106 retry.go:31] will retry after 817.290744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.666990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.667103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.894929  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.964144  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.967581  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.967615  483106 retry.go:31] will retry after 586.961084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.167465  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:02.167512  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
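In parallel, node_ready polls GET /api/v1/nodes/functional-066896 every 500ms (note the .166/.666 timestamps) for up to 6m0s, tolerating connection-refused while the apiserver restarts. A bare net/http version of one probe loop (a sketch only: the real client also presents the client certificate from the kapi config above and verifies the minikube CA, both of which this skips):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify keeps the sketch short; the real round tripper
		// verifies the server against the minikube CA.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-066896"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			req, _ := http.NewRequest("GET", url, nil)
			// Same Accept header as the round_trippers entries above.
			req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
			resp, err := client.Do(req)
			if err != nil {
				fmt.Println("will retry:", err) // connection refused while the apiserver is down
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("timed out waiting for node")
	}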
	I1202 21:38:02.292000  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:02.348780  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.352211  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.352246  483106 retry.go:31] will retry after 1.098539896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.555610  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:02.616881  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.616985  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.617011  483106 retry.go:31] will retry after 1.090026315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.667191  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.667272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.667575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.451026  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:03.515404  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.515439  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.515458  483106 retry.go:31] will retry after 2.58724354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.666944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.667328  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.707632  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:03.776872  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.776924  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.776953  483106 retry.go:31] will retry after 972.290717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.166626  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.166706  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:04.666777  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.666867  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.667243  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:04.667303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:04.749460  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:04.810694  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:04.810734  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.810752  483106 retry.go:31] will retry after 3.951899284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 2 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:05.166, 21:38:05.666); duplicate lines elided ...]
	I1202 21:38:06.102988  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:06.161220  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:06.161263  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.161284  483106 retry.go:31] will retry after 3.838527337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 3 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:06.166 - 21:38:07.166); duplicate lines elided ...]
	W1202 21:38:07.166671  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 3 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:07.666 - 21:38:08.666); duplicate lines elided ...]
	I1202 21:38:08.763053  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:08.821648  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:08.821701  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:08.821721  483106 retry.go:31] will retry after 4.430309202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:09.166538  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.166615  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.166964  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:09.167037  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:09.666806  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.666904  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.667263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.001423  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:10.065960  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:10.069561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.069595  483106 retry.go:31] will retry after 4.835447081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 3 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:10.166 - 21:38:11.167); duplicate lines elided ...]
	W1202 21:38:11.167608  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:11.666 - 21:38:13.167); duplicate lines elided ...]
	I1202 21:38:13.252779  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:13.311539  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:13.314561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.314593  483106 retry.go:31] will retry after 7.77807994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.667097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.667178  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.667555  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:13.667614  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 2 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:14.166, 21:38:14.666); duplicate lines elided ...]
	I1202 21:38:14.906038  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:14.963486  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:14.966545  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:14.966583  483106 retry.go:31] will retry after 9.105443561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 3 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:15.166 - 21:38:16.166); duplicate lines elided ...]
	W1202 21:38:16.167385  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
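The Request/Response pairs elided above fire on a fixed ~500ms cadence (timestamps ending .166 and .666), the signature of a simple poll-until-reachable loop. Below is a self-contained sketch of such a loop using plain net/http; waitForAPIServer is a hypothetical helper, and skipping TLS verification merely stands in for minikube's real client credentials.

    // Sketch of a fixed-interval poll against the apiserver endpoint,
    // matching the ~500ms cadence in the log. Not minikube's code.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer polls url every interval until any HTTP response
    // arrives or the timeout elapses.
    func waitForAPIServer(url string, interval, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: interval,
    		// The apiserver's cert is self-signed here; a real caller
    		// should trust the cluster CA instead of skipping checks.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			return nil // apiserver answered
    		}
    		fmt.Printf("still waiting: %v\n", err)
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("apiserver at %s not reachable after %v", url, timeout)
    }

    func main() {
    	if err := waitForAPIServer("https://192.168.49.2:8441/healthz", 500*time.Millisecond, 10*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }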
	[... readiness poll: 5 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:16.667 - 21:38:18.666); duplicate lines elided ...]
	W1202 21:38:18.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:19.167 - 21:38:20.666); duplicate lines elided ...]
	I1202 21:38:21.093408  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:21.149979  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:21.153644  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.153677  483106 retry.go:31] will retry after 11.903983297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.166790  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.167199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:21.167253  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:21.666 - 21:38:23.167); duplicate lines elided ...]
	W1202 21:38:23.167514  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:23.666741  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.666815  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.667100  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.072876  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:24.134664  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:24.134721  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.134742  483106 retry.go:31] will retry after 11.08333461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
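The ssh_runner lines show the applier invoking the version-pinned kubectl binary with KUBECONFIG set, then capturing stdout/stderr for the retry log; the validation error itself arises because kubectl cannot download the OpenAPI schema from the dead apiserver before applying. The sketch below mirrors only the shape of that invocation via os/exec; applyManifest is an illustrative helper, and the real command runs through minikube's ssh_runner inside the node, not locally.

    // Sketch of shelling out to kubectl the way the ssh_runner lines do
    // (env var + apply --force -f). Hypothetical helper, local-exec only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifest runs `kubectl apply --force -f manifest` with the
    // given kubeconfig and returns combined output on failure.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
    	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	// Paths taken from the log above.
    	err := applyManifest("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig", "/etc/kubernetes/addons/storageclass.yaml")
    	fmt.Println(err)
    }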
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:24.166 - 21:38:25.667); duplicate lines elided ...]
	W1202 21:38:25.667651  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 5 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:26.166 - 21:38:28.166); duplicate lines elided ...]
	W1202 21:38:28.167314  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 5 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:28.667 - 21:38:30.666); duplicate lines elided ...]
	W1202 21:38:30.666751  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:31.166 - 21:38:32.666); duplicate lines elided ...]
	W1202 21:38:32.666785  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:33.058732  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:33.133401  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:33.133437  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.133456  483106 retry.go:31] will retry after 7.836153133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:33.166 - 21:38:34.667); duplicate lines elided ...]
	W1202 21:38:34.667486  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:35.166145  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.166224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.166561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:35.218798  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:35.277107  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:35.277160  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.277179  483106 retry.go:31] will retry after 18.212486347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:35.666 - 21:38:37.167); duplicate lines elided ...]
	W1202 21:38:37.167279  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 5 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:37.667 - 21:38:39.667); duplicate lines elided ...]
	W1202 21:38:39.667503  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 2 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:40.166, 21:38:40.666); duplicate lines elided ...]
	I1202 21:38:40.969813  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:41.027522  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:41.030695  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.030727  483106 retry.go:31] will retry after 26.445141412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.167017  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.167086  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.167412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:41.667158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.667226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.667538  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:41.667593  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... readiness poll: 4 unanswered GET requests to /api/v1/nodes/functional-066896 (21:38:42.166 - 21:38:43.667); duplicate lines elided ...]
	W1202 21:38:43.667663  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:44.166619  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.166695  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.167048  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:44.666563  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.666635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.666906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.166291  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.666557  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.666637  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.666980  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:46.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.166248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.166526  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:46.166568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:46.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.666372  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.166454  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.166529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.166849  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.667114  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.667196  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.667500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:48.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.166278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.166598  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:48.166644  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:48.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.166918  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.166985  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.167265  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.667124  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:50.167148  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.167544  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:50.167600  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:50.666859  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.666941  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.667348  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.166149  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.666321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.666742  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.167091  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.167502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.666630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:52.666682  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:53.166365  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.166440  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.166743  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:53.490393  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:53.549126  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:53.552379  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.552413  483106 retry.go:31] will retry after 28.270272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
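	[editor's note] The block above shows the applier pattern visible throughout this log: shell out to kubectl apply --force, and when the apiserver is unreachable, schedule another attempt after a jittered delay (retry.go:31, "will retry after 28.270272942s"). The Go sketch below is a minimal illustration of that loop under stated assumptions; applyManifest and the backoff constants are hypothetical stand-ins, not minikube's actual retry.go.

	// Sketch of an apply-with-retry loop, mirroring the ssh_runner
	// invocations in this log. Helper names and backoff values are
	// hypothetical, for illustration only.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifest runs `sudo KUBECONFIG=... kubectl apply --force -f path`
	// and returns the combined output and error.
	func applyManifest(kubectl, kubeconfig, path string) (string, error) {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", path)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		const (
			kubectl    = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
			kubeconfig = "/var/lib/minikube/kubeconfig"
			manifest   = "/etc/kubernetes/addons/storage-provisioner.yaml"
		)
		backoff := 10 * time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := applyManifest(kubectl, kubeconfig, manifest)
			if err == nil {
				fmt.Println("applied:", manifest)
				return
			}
			// Jitter the delay so concurrent addon appliers do not retry in
			// lockstep; the odd durations in the log (28.270272942s) suggest
			// a randomized component like this.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("apply failed (attempt %d): %v\n%s\nwill retry after %s\n",
				attempt, err, out, delay)
			time.Sleep(delay)
			backoff *= 2
		}
		fmt.Println("giving up on", manifest)
	}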
	[... repetitive polling elided: from 21:38:53 to 21:39:07 the same GET against /api/v1/nodes/functional-066896 continued at ~500ms intervals, each attempt refused (status="", milliseconds=0), with node_ready.go:55 warnings every ~2s: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1202 21:39:07.476950  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:07.537734  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:07.540988  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.541021  483106 retry.go:31] will retry after 43.142584555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... repetitive polling elided: from 21:39:07 to 21:39:21 the readiness loop kept polling /api/v1/nodes/functional-066896 every ~500ms; all requests were refused and node_ready.go:55 continued warning every ~2s: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1202 21:39:21.822959  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:39:21.878670  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878722  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878822  483106 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
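	[editor's note] Every failure in this section reduces to the same root cause: nothing is answering on port 8441, so both the node poll (192.168.49.2:8441) and kubectl's OpenAPI download (localhost:8441) get connection refused, which in turn fails manifest validation. A quick way to confirm whether the apiserver comes back is to probe its standard /readyz health endpoint directly; the sketch below is an illustrative one-off probe, not part of the test harness, and skips TLS verification only because it does not load the cluster CA.

	// One-off probe of the apiserver this log keeps failing to reach.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by the cluster CA, which this
				// ad-hoc probe does not load, so skip verification here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			// With the apiserver down this prints the same
			// "connect: connection refused" seen throughout the log.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz status:", resp.Status)
	}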
	[... repetitive polling elided: from 21:39:22 to 21:39:37 the ~500ms GET polling of /api/v1/nodes/functional-066896 continued unchanged, every request refused (status="", milliseconds=0), with node_ready.go:55 warnings every ~2s: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1202 21:39:37.667188  483106 type.go:168] "Request Body" body=""
	I1202 21:39:37.667263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:37.667557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:38.166244  483106 type.go:168] "Request Body" body=""
	I1202 21:39:38.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:38.166608  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:38.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:39:38.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:38.666617  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:38.666658  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:39.166799  483106 type.go:168] "Request Body" body=""
	I1202 21:39:39.166866  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:39.167214  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:39.666873  483106 type.go:168] "Request Body" body=""
	I1202 21:39:39.666945  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:39.667279  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:40.166544  483106 type.go:168] "Request Body" body=""
	I1202 21:39:40.166616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:40.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:40.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:39:40.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:40.666645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:40.666705  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:41.166392  483106 type.go:168] "Request Body" body=""
	I1202 21:39:41.166467  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:41.166820  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:41.667109  483106 type.go:168] "Request Body" body=""
	I1202 21:39:41.667193  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:41.667456  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:42.166205  483106 type.go:168] "Request Body" body=""
	I1202 21:39:42.166286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:42.166704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:42.666430  483106 type.go:168] "Request Body" body=""
	I1202 21:39:42.666507  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:42.666850  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:42.666912  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:43.166126  483106 type.go:168] "Request Body" body=""
	I1202 21:39:43.166198  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:43.166502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:43.666218  483106 type.go:168] "Request Body" body=""
	I1202 21:39:43.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:43.666604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:44.166582  483106 type.go:168] "Request Body" body=""
	I1202 21:39:44.166676  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:44.167019  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:44.666769  483106 type.go:168] "Request Body" body=""
	I1202 21:39:44.666837  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:44.667123  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:44.667165  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:45.167137  483106 type.go:168] "Request Body" body=""
	I1202 21:39:45.167219  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:45.167616  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:45.666336  483106 type.go:168] "Request Body" body=""
	I1202 21:39:45.666407  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:45.666753  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:46.166833  483106 type.go:168] "Request Body" body=""
	I1202 21:39:46.166918  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:46.167201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:46.666991  483106 type.go:168] "Request Body" body=""
	I1202 21:39:46.667084  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:46.667426  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:46.667487  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:47.166178  483106 type.go:168] "Request Body" body=""
	I1202 21:39:47.166250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:47.166572  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:47.666176  483106 type.go:168] "Request Body" body=""
	I1202 21:39:47.666257  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:47.666519  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:48.166237  483106 type.go:168] "Request Body" body=""
	I1202 21:39:48.166320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:48.166668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:48.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:39:48.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:48.666685  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:49.166761  483106 type.go:168] "Request Body" body=""
	I1202 21:39:49.166838  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:49.167141  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:49.167190  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:49.667042  483106 type.go:168] "Request Body" body=""
	I1202 21:39:49.667119  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:49.667437  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:50.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:39:50.166247  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:50.166578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:50.666738  483106 type.go:168] "Request Body" body=""
	I1202 21:39:50.666823  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:50.667106  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:50.684445  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:50.752913  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.752959  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.753053  483106 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:50.754872  483106 out.go:179] * Enabled addons: 
	I1202 21:39:50.756298  483106 addons.go:530] duration metric: took 1m51.620061888s for enable addons: enabled=[]
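
(The storageclass failure above shows how minikube enables addons: it shells out to the control plane's bundled kubectl as sudo KUBECONFIG=<kubeconfig> <kubectl> apply --force -f <manifest>, and retries when the apply fails. A hedged sketch of that shape; applyAddonManifest, the attempt count, and the backoff interval are assumptions for illustration, not values from the log:)

// Hedged sketch of the apply-with-retry shape suggested by the log above.
// Illustrative only, not minikube's actual addon code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddonManifest runs the same style of command the log shows:
//   sudo KUBECONFIG=<cfg> <kubectl> apply --force -f <manifest>
// and retries while the apiserver is still refusing connections.
func applyAddonManifest(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG="+kubeconfig,
			kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		time.Sleep(2 * time.Second) // assumed backoff; the real interval isn't in the log
	}
	return lastErr
}

func main() {
	err := applyAddonManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		5)
	if err != nil {
		fmt.Println("enabling addon failed:", err)
	}
}

(Here every attempt fails the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, which is refusing connections. The --validate=false hint in the error would skip that download, but the apply itself would still fail against the dead endpoint, which is why the addon ends up enabled=[].)
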
	I1202 21:39:51.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:39:51.166426  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:51.166756  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:51.666472  483106 type.go:168] "Request Body" body=""
	I1202 21:39:51.666542  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:51.666888  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:51.666948  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:52.167023  483106 type.go:168] "Request Body" body=""
	I1202 21:39:52.167094  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:52.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:52.666886  483106 type.go:168] "Request Body" body=""
	I1202 21:39:52.666958  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:52.667302  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:53.167134  483106 type.go:168] "Request Body" body=""
	I1202 21:39:53.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:53.167525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:53.667118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:53.667191  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:53.667443  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:53.667482  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:54.166580  483106 type.go:168] "Request Body" body=""
	I1202 21:39:54.166653  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:54.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:54.666761  483106 type.go:168] "Request Body" body=""
	I1202 21:39:54.666832  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:54.667157  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:55.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:39:55.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:55.166643  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:55.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:39:55.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:55.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:56.166424  483106 type.go:168] "Request Body" body=""
	I1202 21:39:56.166496  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:56.166829  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:56.166886  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:56.666196  483106 type.go:168] "Request Body" body=""
	I1202 21:39:56.666274  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:56.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:57.166233  483106 type.go:168] "Request Body" body=""
	I1202 21:39:57.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:57.166658  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:57.666359  483106 type.go:168] "Request Body" body=""
	I1202 21:39:57.666436  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:57.666730  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:58.166410  483106 type.go:168] "Request Body" body=""
	I1202 21:39:58.166495  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:58.166819  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:58.666570  483106 type.go:168] "Request Body" body=""
	I1202 21:39:58.666669  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:58.667123  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:58.667176  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:59.166497  483106 type.go:168] "Request Body" body=""
	I1202 21:39:59.166577  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:59.166896  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:59.667069  483106 type.go:168] "Request Body" body=""
	I1202 21:39:59.667137  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:59.667455  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:00.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:40:00.166590  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:00.166967  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:00.666805  483106 type.go:168] "Request Body" body=""
	I1202 21:40:00.666883  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:00.667412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:00.667479  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:01.166600  483106 type.go:168] "Request Body" body=""
	I1202 21:40:01.166671  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:01.167071  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:01.666865  483106 type.go:168] "Request Body" body=""
	I1202 21:40:01.666943  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:01.667324  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:02.167126  483106 type.go:168] "Request Body" body=""
	I1202 21:40:02.167206  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:02.167585  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:02.666196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:02.666266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:02.666525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:03.166226  483106 type.go:168] "Request Body" body=""
	I1202 21:40:03.166298  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:03.166603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:03.166657  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:03.666239  483106 type.go:168] "Request Body" body=""
	I1202 21:40:03.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:03.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:04.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:40:04.166563  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:04.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:04.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:40:04.666370  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:04.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:05.166425  483106 type.go:168] "Request Body" body=""
	I1202 21:40:05.166503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:05.166802  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:05.166854  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:05.666560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:05.666632  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:05.666917  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:06.166784  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.166862  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.167188  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:06.666980  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.667073  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.667410  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:07.167168  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.167242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.167577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:07.167637  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:07.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.166347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.666848  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.666917  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.667201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.167192  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.167533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:09.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:10.166218  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.166297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.166630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:10.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.166230  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.166652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.666139  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.666209  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:12.166254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:12.166731  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:12.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.166445  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.166702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:14.166684  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.166770  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.167156  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:14.167223  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:14.666896  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.667255  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.167098  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.167589  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.666317  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.666392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:16.166898  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.166964  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.167280  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:16.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:16.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.667212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.667594  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.166183  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.166578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.666227  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.666643  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.166363  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.166741  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.666473  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.666544  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.666888  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:18.666946  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:19.166811  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.166894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.167197  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:19.667052  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.667494  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.166251  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.666278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.666536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:21.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:21.166718  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:21.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.166154  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.166236  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.166525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.666654  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:23.166272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.166350  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.166696  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:23.166758  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:23.667064  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.166423  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.166514  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.166938  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.666591  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.666926  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.166167  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.166574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.666268  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.666343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:25.666738  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:26.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.166386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.166758  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:26.667125  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.667482  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.166187  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.166601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.666179  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.666248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:28.166873  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.166943  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.167276  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:28.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
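	[editor's note] For context, the node_ready.go warning wraps a check of the node's "Ready" condition, which never gets as far as inspecting the condition here because the GET itself fails. A hedged client-go sketch of that check follows; the kubeconfig path is illustrative and this is not minikube's actual code.

// ready_check.go - a sketch of reading the Ready condition of the node the
// log is polling, via client-go. Assumes a kubeconfig exists at the path below.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; minikube writes one per profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Same object the log polls: /api/v1/nodes/functional-066896.
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "functional-066896", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down, this is the path that produces the
		// "will retry" warnings in the log.
		log.Fatalf("error getting node (will retry): %v", err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node Ready condition: %s\n", cond.Status)
		}
	}
}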
	I1202 21:40:28.667149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.667219  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.667624  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical GET poll above repeats every ~500 ms from 21:40:29 through 21:41:26, each attempt returning the same empty response; node_ready.go emits the same "connection refused" warning roughly every 2-2.5 s (24 further occurrences), last one shown below]
	W1202 21:41:24.667625  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:27.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:27.166741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:27.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.166370  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.166448  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.666614  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:29.166581  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.166657  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.166988  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:29.167064  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:29.666310  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.666379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.166344  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.666407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.666494  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.666837  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.166203  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.166591  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.666262  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.666700  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:31.666773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:32.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.166666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:32.666931  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.667021  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.167169  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.666354  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:34.166448  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.166521  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.166778  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:34.166817  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:34.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.166425  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.166518  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.166928  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.666213  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.666489  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.166173  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.166250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.166587  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.666281  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.666706  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:36.666759  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:37.166409  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.166478  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.166748  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:37.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.666371  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.166380  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.166751  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.666156  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.666498  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:39.166531  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.166607  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.166922  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:39.166975  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:39.666290  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.666360  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.666641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.166314  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.166383  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.166661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.666295  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.666370  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.666709  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.166407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.166482  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.166800  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.666481  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.666552  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.666826  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:41.666867  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:42.166504  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.166597  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.167020  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:42.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.166575  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.166655  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.166923  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.666265  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:44.166680  483106 type.go:168] "Request Body" body=""
	I1202 21:41:44.166751  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:44.167102  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:44.167158  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:44.666373  483106 type.go:168] "Request Body" body=""
	I1202 21:41:44.666442  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:44.666712  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:45.166323  483106 type.go:168] "Request Body" body=""
	I1202 21:41:45.166419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:45.166904  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:45.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:41:45.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:45.666682  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:46.166971  483106 type.go:168] "Request Body" body=""
	I1202 21:41:46.167054  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:46.167358  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:46.167415  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:46.667180  483106 type.go:168] "Request Body" body=""
	I1202 21:41:46.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:46.667573  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:47.166272  483106 type.go:168] "Request Body" body=""
	I1202 21:41:47.166353  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:47.166671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:47.666144  483106 type.go:168] "Request Body" body=""
	I1202 21:41:47.666220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:47.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:48.166246  483106 type.go:168] "Request Body" body=""
	I1202 21:41:48.166328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:48.166655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:48.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:41:48.666285  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:48.666616  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:48.666674  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:49.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:41:49.166829  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:49.167114  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:49.666912  483106 type.go:168] "Request Body" body=""
	I1202 21:41:49.667008  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:49.667343  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:50.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:41:50.167265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:50.167597  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:50.666842  483106 type.go:168] "Request Body" body=""
	I1202 21:41:50.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:50.667199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:50.667239  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:51.167085  483106 type.go:168] "Request Body" body=""
	I1202 21:41:51.167158  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:51.167484  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:51.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:51.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:51.666588  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:52.166203  483106 type.go:168] "Request Body" body=""
	I1202 21:41:52.166288  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:52.166576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:52.666279  483106 type.go:168] "Request Body" body=""
	I1202 21:41:52.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:52.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:53.166266  483106 type.go:168] "Request Body" body=""
	I1202 21:41:53.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:53.166682  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:53.166739  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:53.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:41:53.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:53.666538  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:54.166529  483106 type.go:168] "Request Body" body=""
	I1202 21:41:54.166605  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:54.167128  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:54.666895  483106 type.go:168] "Request Body" body=""
	I1202 21:41:54.666973  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:54.667337  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:55.167110  483106 type.go:168] "Request Body" body=""
	I1202 21:41:55.167191  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:55.167497  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:55.167547  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:55.666230  483106 type.go:168] "Request Body" body=""
	I1202 21:41:55.666304  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:55.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:41:56.166312  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:56.166655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:56.666335  483106 type.go:168] "Request Body" body=""
	I1202 21:41:56.666403  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:56.666666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:57.166298  483106 type.go:168] "Request Body" body=""
	I1202 21:41:57.166382  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:57.166769  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:57.666462  483106 type.go:168] "Request Body" body=""
	I1202 21:41:57.666534  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:57.666859  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:57.666917  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:58.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:41:58.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:58.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:58.666260  483106 type.go:168] "Request Body" body=""
	I1202 21:41:58.666331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:58.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:59.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:41:59.167054  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:59.167410  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:59.666199  483106 type.go:168] "Request Body" body=""
	I1202 21:41:59.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:59.666594  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:00.166347  483106 type.go:168] "Request Body" body=""
	I1202 21:42:00.166428  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:00.166764  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:00.166812  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:00.666486  483106 type.go:168] "Request Body" body=""
	I1202 21:42:00.666567  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:00.666939  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:01.166699  483106 type.go:168] "Request Body" body=""
	I1202 21:42:01.166771  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:01.167072  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:01.666854  483106 type.go:168] "Request Body" body=""
	I1202 21:42:01.666927  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:01.667287  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:02.166943  483106 type.go:168] "Request Body" body=""
	I1202 21:42:02.167041  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:02.167384  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:02.167439  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:02.667159  483106 type.go:168] "Request Body" body=""
	I1202 21:42:02.667231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:02.667496  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:03.166171  483106 type.go:168] "Request Body" body=""
	I1202 21:42:03.166244  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:03.166536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:03.666255  483106 type.go:168] "Request Body" body=""
	I1202 21:42:03.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:03.666647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:04.166621  483106 type.go:168] "Request Body" body=""
	I1202 21:42:04.166698  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:04.166972  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:04.666792  483106 type.go:168] "Request Body" body=""
	I1202 21:42:04.666871  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:04.667225  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:04.667298  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:05.167086  483106 type.go:168] "Request Body" body=""
	I1202 21:42:05.167164  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:05.167486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:05.666753  483106 type.go:168] "Request Body" body=""
	I1202 21:42:05.666818  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:05.667100  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:06.166887  483106 type.go:168] "Request Body" body=""
	I1202 21:42:06.166962  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:06.167288  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:06.666965  483106 type.go:168] "Request Body" body=""
	I1202 21:42:06.667059  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:06.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:06.667427  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:07.166619  483106 type.go:168] "Request Body" body=""
	I1202 21:42:07.166695  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:07.166958  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:07.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:42:07.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:07.666703  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:08.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:42:08.166359  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:08.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:08.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:42:08.666293  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:08.666583  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:09.166604  483106 type.go:168] "Request Body" body=""
	I1202 21:42:09.166681  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:09.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:09.167077  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:09.666840  483106 type.go:168] "Request Body" body=""
	I1202 21:42:09.666912  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:09.667238  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:10.166509  483106 type.go:168] "Request Body" body=""
	I1202 21:42:10.166582  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:10.166858  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:10.666260  483106 type.go:168] "Request Body" body=""
	I1202 21:42:10.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:10.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:11.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:42:11.166374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:11.166742  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:11.666945  483106 type.go:168] "Request Body" body=""
	I1202 21:42:11.667031  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:11.667356  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:11.667420  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:12.167101  483106 type.go:168] "Request Body" body=""
	I1202 21:42:12.167190  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:12.167544  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:12.666215  483106 type.go:168] "Request Body" body=""
	I1202 21:42:12.666296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:12.666600  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:13.166981  483106 type.go:168] "Request Body" body=""
	I1202 21:42:13.167068  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:13.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:13.667199  483106 type.go:168] "Request Body" body=""
	I1202 21:42:13.667286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:13.667642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:13.667698  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:14.166489  483106 type.go:168] "Request Body" body=""
	I1202 21:42:14.166564  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:14.166888  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:14.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:42:14.666274  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:14.666551  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:15.166244  483106 type.go:168] "Request Body" body=""
	I1202 21:42:15.166321  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:15.166657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:15.666366  483106 type.go:168] "Request Body" body=""
	I1202 21:42:15.666440  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:15.666760  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:16.166141  483106 type.go:168] "Request Body" body=""
	I1202 21:42:16.166215  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:16.166468  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:16.166510  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 request/empty-response cycle repeats at ~500 ms intervals from 21:42:16.666 through 21:43:17.666 (~120 attempts), every attempt failing identically with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning roughly every 2 s throughout ...]
	I1202 21:43:18.166212  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.166315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.166633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.666248  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:19.166505  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.166576  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.166870  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:19.166918  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:19.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.666567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.166357  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.666369  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.666443  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.666785  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:21.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:22.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.166561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.166824  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:22.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.666368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.166281  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.166368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.166699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.666210  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.666283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:24.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.166660  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.167035  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:24.167111  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:24.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.667230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.166928  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.167024  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.667147  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.667223  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.667622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.166295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.666243  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.666504  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:26.666554  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:27.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.166660  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:27.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.166197  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.166524  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.666680  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:28.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:29.166765  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.166840  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.167165  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:29.666897  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.167174  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.167271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.167625  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.666334  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.666419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.666807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:30.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:31.167152  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.167536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:31.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.166351  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.666287  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.666548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:33.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:33.166706  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:33.666243  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.166799  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.666282  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.666375  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.666726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.166319  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.166392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.666514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:35.666568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:36.166250  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.166626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:36.666324  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.666401  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.666725  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.166908  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.166975  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.667118  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.667398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:37.667447  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:38.166151  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.166226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.166528  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:38.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.666633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.166754  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.167075  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.666637  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.666714  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.667049  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:40.166341  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.166420  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.166681  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:40.166728  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:40.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.666455  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.666787  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.666356  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.666429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:42.166327  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.166411  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.166822  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:42.166896  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:42.666589  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.666665  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.667015  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.166747  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.166812  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.167088  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.666863  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.666934  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.667289  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:44.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.166981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.167339  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:44.167397  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:44.666667  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.666740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.667046  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.166921  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.167029  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.666175  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.666253  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.666621  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.166254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.166514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:46.666754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:47.166451  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.166864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:47.667182  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.667255  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.667579  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.666341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:49.166748  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.166817  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:49.167250  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:49.666922  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.667010  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.166155  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.666900  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.667180  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:51.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.167345  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:51.167391  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:51.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.667233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.667577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.166264  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.666171  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.666249  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.166366  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.666529  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:53.666576  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:54.166567  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.166645  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:54.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.667510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.166265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.166542  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:55.666707  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.166311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.166642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:56.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.666282  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.167073  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.167151  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.167546  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.666340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:57.666741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:58.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:58.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.666328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.666634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:44:00.169272  483106 type.go:168] "Request Body" body=""
	W1202 21:44:00.169401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 21:44:00.169464  483106 node_ready.go:38] duration metric: took 6m0.003439328s for node "functional-066896" to be "Ready" ...
	I1202 21:44:00.175124  483106 out.go:203] 
	W1202 21:44:00.178380  483106 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:44:00.178413  483106 out.go:285] * 
	W1202 21:44:00.180645  483106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:44:00.185151  483106 out.go:203] 
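
The six-minute stretch above is a fixed-cadence readiness wait: one GET of /api/v1/nodes/functional-066896 every 500ms, with each "connection refused" logged as "will retry" rather than treated as fatal, until the 6m0s deadline fires and start exits via the GUEST_START path. A minimal Go sketch of that kind of poll, assuming client-go, a kubeconfig at the default path, and the node name taken from this log (an illustration, not minikube's actual implementation):

// readiness_poll.go: poll a node's Ready condition every 500ms until it is
// True or a 6-minute deadline expires, mirroring the loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const node = "functional-066896" // node name taken from the log above
	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, getErr := client.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
			if getErr != nil {
				// As in the log: connection refused is retried, not fatal.
				fmt.Println("will retry:", getErr)
				return false, nil
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// A deadline-exceeded result here corresponds to the GUEST_START exit above.
	fmt.Println("wait result:", err)
}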
	
	
	==> CRI-O <==
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436081037Z" level=info msg="Using the internal default seccomp profile"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436142986Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436202104Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436256571Z" level=info msg="RDT not available in the host system"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436314524Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437068927Z" level=info msg="Conmon does support the --sync option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437154245Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437215505Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437845682Z" level=info msg="Conmon does support the --sync option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437934142Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.4381156Z" level=info msg="Updated default CNI network name to "
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.438813142Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.439566183Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.439720425Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.47605174Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476242413Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476306643Z" level=info msg="Create NRI interface"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476428245Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476442554Z" level=info msg="runtime interface created"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476456298Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476463396Z" level=info msg="runtime interface starting up..."
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476469705Z" level=info msg="starting plugins..."
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.47648285Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476550781Z" level=info msg="No systemd watchdog enabled"
	Dec 02 21:37:57 functional-066896 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
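
With crio.service started and listening on the crio.sock path shown in the configuration dump, the runtime can be queried over CRI directly. A rough Go sketch, assuming the k8s.io/cri-api v1 client and that socket path; this is roughly what `crictl version` does, shown only to illustrate the wire path:

// cri_probe.go: ask CRI-O for its version over the CRI gRPC socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Socket path comes from the "listen" key in the config dump above.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	v, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
}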
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:44:02.394725    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:02.395281    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:02.396790    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:02.397258    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:02.398768    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:44:02 up  3:26,  0 user,  load average: 0.37, 0.25, 0.50
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:43:59 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:00 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 02 21:44:00 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:00 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:00 functional-066896 kubelet[9089]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:00 functional-066896 kubelet[9089]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:00 functional-066896 kubelet[9089]: E1202 21:44:00.555675    9089 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:00 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:00 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:01 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 02 21:44:01 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:01 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:01 functional-066896 kubelet[9108]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:01 functional-066896 kubelet[9108]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:01 functional-066896 kubelet[9108]: E1202 21:44:01.423293    9108 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:01 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:01 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:02 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 02 21:44:02 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:02 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:02 functional-066896 kubelet[9154]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:02 functional-066896 kubelet[9154]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:02 functional-066896 kubelet[9154]: E1202 21:44:02.227713    9154 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:02 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:02 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
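The kubelet log above shows the proximate cause of this failure: kubelet exits on every restart with "kubelet is configured to not run on a host using cgroup v1", so the API server on port 8441 never comes up and every kubectl call is refused. A minimal sketch for confirming the host's cgroup mode, assuming shell access to the node (the exact output strings can vary by distro):

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# Docker 20.10+ also reports the cgroup version it detected.
	docker info --format '{{.CgroupVersion}}'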
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (329.068338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-066896 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-066896 get po -A: exit status 1 (55.029681ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-066896 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-066896 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-066896 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
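The inspect output shows 8441/tcp (the apiserver port) published on 127.0.0.1:33151. One way to probe it directly from the host, assuming curl is available; with the apiserver down, this should fail with a connection error rather than return "ok":

	# -k skips verification against the minikube CA; /livez is a standard apiserver health endpoint.
	curl -sk https://127.0.0.1:33151/livez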
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (329.332234ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs -n 25: (1.021286016s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount3 --alsologtostderr -v=1                                │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh            │ functional-218190 ssh findmnt -T /mount1                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount2                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh findmnt -T /mount3                                                                                                          │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount          │ -p functional-218190 --kill=true                                                                                                                  │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service list                                                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service list -o json                                                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service --namespace=default --https --url hello-node                                                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-218190 --alsologtostderr -v=1                                                                                    │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service        │ functional-218190 service hello-node --url --format={{.IP}}                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service        │ functional-218190 service hello-node --url                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ update-context │ functional-218190 update-context --alsologtostderr -v=2                                                                                           │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format short --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh            │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image          │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image          │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete         │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start          │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start          │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:37:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:37:54.052280  483106 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:37:54.052518  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052549  483106 out.go:374] Setting ErrFile to fd 2...
	I1202 21:37:54.052570  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052830  483106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:37:54.053229  483106 out.go:368] Setting JSON to false
	I1202 21:37:54.054096  483106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12002,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:37:54.054239  483106 start.go:143] virtualization:  
	I1202 21:37:54.055968  483106 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:37:54.057216  483106 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:37:54.057305  483106 notify.go:221] Checking for updates...
	I1202 21:37:54.059409  483106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:37:54.060390  483106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:54.061474  483106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:37:54.062609  483106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:37:54.063772  483106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:37:54.065317  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:54.065458  483106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:37:54.087852  483106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:37:54.087968  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.157300  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.14827719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.157407  483106 docker.go:319] overlay module found
	I1202 21:37:54.158855  483106 out.go:179] * Using the docker driver based on existing profile
	I1202 21:37:54.160356  483106 start.go:309] selected driver: docker
	I1202 21:37:54.160374  483106 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.160477  483106 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:37:54.160570  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.221500  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.212376823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.221914  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:54.221982  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:54.222036  483106 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.223816  483106 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:37:54.224907  483106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:37:54.226134  483106 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:37:54.227415  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:54.227490  483106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:37:54.247414  483106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:37:54.247439  483106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:37:54.295322  483106 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:37:54.500334  483106 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:37:54.500536  483106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:37:54.500574  483106 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500673  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:37:54.500684  483106 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.936µs
	I1202 21:37:54.500698  483106 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:37:54.500710  483106 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500741  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:37:54.500746  483106 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.194µs
	I1202 21:37:54.500752  483106 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500761  483106 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500788  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:37:54.500788  483106 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:37:54.500792  483106 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.492µs
	I1202 21:37:54.500799  483106 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500809  483106 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500816  483106 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500852  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:37:54.500856  483106 start.go:364] duration metric: took 26.462µs to acquireMachinesLock for "functional-066896"
	I1202 21:37:54.500858  483106 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.838µs
	I1202 21:37:54.500864  483106 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500869  483106 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:37:54.500875  483106 fix.go:54] fixHost starting: 
	I1202 21:37:54.500873  483106 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500901  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:37:54.500905  483106 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.15µs
	I1202 21:37:54.500919  483106 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500928  483106 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500951  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:37:54.500956  483106 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.833µs
	I1202 21:37:54.500961  483106 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:37:54.500970  483106 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500994  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:37:54.500998  483106 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.391µs
	I1202 21:37:54.501003  483106 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:37:54.501011  483106 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.501036  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:37:54.501040  483106 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.097µs
	I1202 21:37:54.501046  483106 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:37:54.501065  483106 cache.go:87] Successfully saved all images to host disk.
	I1202 21:37:54.501197  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:54.517471  483106 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:37:54.517510  483106 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:37:54.519079  483106 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:37:54.519117  483106 machine.go:94] provisionDockerMachine start ...
	I1202 21:37:54.519205  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.536086  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.536422  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.536437  483106 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:37:54.686523  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.686547  483106 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:37:54.686612  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.710674  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.710988  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.711037  483106 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:37:54.868253  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.868331  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.886749  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.887092  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.887115  483106 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:37:55.036431  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:37:55.036522  483106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:37:55.036593  483106 ubuntu.go:190] setting up certificates
	I1202 21:37:55.036621  483106 provision.go:84] configureAuth start
	I1202 21:37:55.036718  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:55.055483  483106 provision.go:143] copyHostCerts
	I1202 21:37:55.055534  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055575  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:37:55.055589  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055670  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:37:55.055775  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055797  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:37:55.055803  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055836  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:37:55.055880  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055901  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:37:55.055908  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055941  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:37:55.055998  483106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:37:55.445716  483106 provision.go:177] copyRemoteCerts
	I1202 21:37:55.445788  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:37:55.445829  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.462295  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:55.566646  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:37:55.566707  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:37:55.584230  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:37:55.584339  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:37:55.601138  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:37:55.601197  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:37:55.619092  483106 provision.go:87] duration metric: took 582.43702ms to configureAuth
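configureAuth regenerates the client certs under .minikube/certs and issues a server certificate for the node with the SANs listed above, then scp's ca.pem, server.pem, and server-key.pem into /etc/docker. minikube does the issuance with Go's crypto libraries; an openssl stand-in for that step, assuming the CA key pair shown in the log, would look roughly like:

	# Issue server.pem/server-key.pem signed by the minikube CA (openssl stand-in,
	# SANs copied from the provision.go:117 line above).
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.functional-066896"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-066896,DNS:localhost,DNS:minikube')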
	I1202 21:37:55.619117  483106 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:37:55.619308  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:55.619413  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.637231  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:55.637559  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:55.637573  483106 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:37:55.956144  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:37:55.956170  483106 machine.go:97] duration metric: took 1.437044454s to provisionDockerMachine
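The container-runtime step that just completed writes a one-line sysconfig drop-in marking the service CIDR (10.96.0.0/12) as an insecure registry for CRI-O, then restarts the daemon. The logged SSH command, reflowed for readability:

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio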
	I1202 21:37:55.956204  483106 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:37:55.956218  483106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:37:55.956294  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:37:55.956339  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.980756  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.091648  483106 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:37:56.095210  483106 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:37:56.095237  483106 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:37:56.095243  483106 command_runner.go:130] > VERSION_ID="12"
	I1202 21:37:56.095248  483106 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:37:56.095253  483106 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:37:56.095256  483106 command_runner.go:130] > ID=debian
	I1202 21:37:56.095270  483106 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:37:56.095275  483106 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:37:56.095281  483106 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:37:56.095363  483106 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:37:56.095385  483106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:37:56.095402  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:37:56.095457  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:37:56.095544  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:37:56.095557  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /etc/ssl/certs/4472112.pem
	I1202 21:37:56.095638  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:37:56.095647  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> /etc/test/nested/copy/447211/hosts
	I1202 21:37:56.095696  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:37:56.103392  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:56.120789  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:37:56.138613  483106 start.go:296] duration metric: took 182.392463ms for postStartSetup
	I1202 21:37:56.138692  483106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:37:56.138730  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.156335  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.255560  483106 command_runner.go:130] > 13%
	I1202 21:37:56.256083  483106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:37:56.260264  483106 command_runner.go:130] > 169G
	I1202 21:37:56.260703  483106 fix.go:56] duration metric: took 1.759824513s for fixHost
	I1202 21:37:56.260720  483106 start.go:83] releasing machines lock for "functional-066896", held for 1.759856579s
	I1202 21:37:56.260787  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:56.278034  483106 ssh_runner.go:195] Run: cat /version.json
	I1202 21:37:56.278057  483106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:37:56.278086  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.278126  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.294975  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.296343  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.394339  483106 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:37:56.394533  483106 ssh_runner.go:195] Run: systemctl --version
	I1202 21:37:56.493105  483106 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:37:56.493163  483106 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:37:56.493186  483106 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:37:56.493258  483106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:37:56.530464  483106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:37:56.534763  483106 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:37:56.534813  483106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:37:56.534914  483106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:37:56.542668  483106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
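The find invocation above is logged as a raw argv, so its parentheses and globs appear unescaped. An equivalent form that can be pasted into a shell (same path and predicates, with the mv made quoting-safe):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;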
	I1202 21:37:56.542693  483106 start.go:496] detecting cgroup driver to use...
	I1202 21:37:56.542754  483106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:37:56.542818  483106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:37:56.557769  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:37:56.570749  483106 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:37:56.570845  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:37:56.586179  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:37:56.599149  483106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:37:56.708191  483106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:37:56.842013  483106 docker.go:234] disabling docker service ...
	I1202 21:37:56.842082  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:37:56.857073  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:37:56.870370  483106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:37:56.987213  483106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:37:57.106635  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
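Because this profile runs CRI-O, both cri-dockerd and dockerd are stopped, disabled, and masked so neither can claim the CRI socket on the next boot. The sequence of logged systemctl calls, collapsed into one block:

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service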
	I1202 21:37:57.119596  483106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:37:57.132314  483106 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
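The resulting /etc/crictl.yaml is a single key pointing crictl at the CRI-O socket, so the later crictl invocations in this log need no --runtime-endpoint flag:

	# /etc/crictl.yaml, as written by the step above
	runtime-endpoint: unix:///var/run/crio/crio.sock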
	I1202 21:37:57.133557  483106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:37:57.133663  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.142404  483106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:37:57.142548  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.151265  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.160043  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.168450  483106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:37:57.177232  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.186240  483106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.194528  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.203498  483106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:37:57.209931  483106 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:37:57.210879  483106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:37:57.218360  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.328965  483106 ssh_runner.go:195] Run: sudo systemctl restart crio
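Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with at least the following keys before the daemon-reload and restart; this reconstruction only collects the values visible in this log (they also show up in the crio config dump further down):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]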
	I1202 21:37:57.485223  483106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:37:57.485296  483106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:37:57.489286  483106 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 21:37:57.489311  483106 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:37:57.489318  483106 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 21:37:57.489325  483106 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:57.489330  483106 command_runner.go:130] > Access: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489343  483106 command_runner.go:130] > Modify: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489348  483106 command_runner.go:130] > Change: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489352  483106 command_runner.go:130] >  Birth: -
	I1202 21:37:57.489576  483106 start.go:564] Will wait 60s for crictl version
	I1202 21:37:57.489633  483106 ssh_runner.go:195] Run: which crictl
	I1202 21:37:57.495444  483106 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:37:57.495541  483106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:37:57.522065  483106 command_runner.go:130] > Version:  0.1.0
	I1202 21:37:57.522330  483106 command_runner.go:130] > RuntimeName:  cri-o
	I1202 21:37:57.522612  483106 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 21:37:57.522814  483106 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:37:57.525085  483106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:37:57.525167  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.560503  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.560529  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.560537  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.560542  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.560547  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.560551  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.560555  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.560560  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.560564  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.560568  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.560572  483106 command_runner.go:130] >      static
	I1202 21:37:57.560580  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.560584  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.560589  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.560595  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.560598  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.560603  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.560612  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.560616  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.560620  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.563007  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.589712  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.589787  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.589809  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.589825  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.589855  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.589880  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.589897  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.589914  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.589955  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.589975  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.589991  483106 command_runner.go:130] >      static
	I1202 21:37:57.590007  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.590023  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.590049  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.590069  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.590086  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.590103  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.590120  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.590146  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.590164  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.593809  483106 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:37:57.595025  483106 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:37:57.611773  483106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:37:57.615442  483106 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:37:57.615683  483106 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:37:57.615790  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:57.615841  483106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:37:57.645971  483106 command_runner.go:130] > {
	I1202 21:37:57.645994  483106 command_runner.go:130] >   "images":  [
	I1202 21:37:57.645998  483106 command_runner.go:130] >     {
	I1202 21:37:57.646007  483106 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:37:57.646011  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646017  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:37:57.646020  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646024  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646033  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 21:37:57.646036  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646041  483106 command_runner.go:130] >       "size":  "29035622",
	I1202 21:37:57.646045  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646049  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646052  483106 command_runner.go:130] >     },
	I1202 21:37:57.646054  483106 command_runner.go:130] >     {
	I1202 21:37:57.646060  483106 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:37:57.646068  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646074  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:37:57.646077  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646080  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646088  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 21:37:57.646096  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646101  483106 command_runner.go:130] >       "size":  "74488375",
	I1202 21:37:57.646105  483106 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:37:57.646109  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646112  483106 command_runner.go:130] >     },
	I1202 21:37:57.646115  483106 command_runner.go:130] >     {
	I1202 21:37:57.646121  483106 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:37:57.646124  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646129  483106 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:37:57.646132  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646136  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646147  483106 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 21:37:57.646150  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646157  483106 command_runner.go:130] >       "size":  "60854229",
	I1202 21:37:57.646161  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646165  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646168  483106 command_runner.go:130] >       },
	I1202 21:37:57.646172  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646175  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646178  483106 command_runner.go:130] >     },
	I1202 21:37:57.646181  483106 command_runner.go:130] >     {
	I1202 21:37:57.646187  483106 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:37:57.646191  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646196  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:37:57.646200  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646203  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646211  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 21:37:57.646216  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646220  483106 command_runner.go:130] >       "size":  "84947242",
	I1202 21:37:57.646223  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646227  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646230  483106 command_runner.go:130] >       },
	I1202 21:37:57.646234  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646238  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646241  483106 command_runner.go:130] >     },
	I1202 21:37:57.646243  483106 command_runner.go:130] >     {
	I1202 21:37:57.646250  483106 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:37:57.646253  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646259  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:37:57.646262  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646266  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646274  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 21:37:57.646277  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646285  483106 command_runner.go:130] >       "size":  "72167568",
	I1202 21:37:57.646289  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646292  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646299  483106 command_runner.go:130] >       },
	I1202 21:37:57.646305  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646309  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646313  483106 command_runner.go:130] >     },
	I1202 21:37:57.646316  483106 command_runner.go:130] >     {
	I1202 21:37:57.646322  483106 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:37:57.646326  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646331  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:37:57.646334  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646338  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646345  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 21:37:57.646348  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646352  483106 command_runner.go:130] >       "size":  "74105124",
	I1202 21:37:57.646356  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646360  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646363  483106 command_runner.go:130] >     },
	I1202 21:37:57.646365  483106 command_runner.go:130] >     {
	I1202 21:37:57.646372  483106 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:37:57.646375  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646381  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:37:57.646384  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646387  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646399  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 21:37:57.646403  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646406  483106 command_runner.go:130] >       "size":  "49819792",
	I1202 21:37:57.646409  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646413  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646416  483106 command_runner.go:130] >       },
	I1202 21:37:57.646421  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646424  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646427  483106 command_runner.go:130] >     },
	I1202 21:37:57.646430  483106 command_runner.go:130] >     {
	I1202 21:37:57.646436  483106 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:37:57.646443  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646447  483106 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.646450  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646454  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646461  483106 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 21:37:57.646464  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646468  483106 command_runner.go:130] >       "size":  "517328",
	I1202 21:37:57.646471  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646474  483106 command_runner.go:130] >         "value":  "65535"
	I1202 21:37:57.646477  483106 command_runner.go:130] >       },
	I1202 21:37:57.646481  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646485  483106 command_runner.go:130] >       "pinned":  true
	I1202 21:37:57.646488  483106 command_runner.go:130] >     }
	I1202 21:37:57.646491  483106 command_runner.go:130] >   ]
	I1202 21:37:57.646493  483106 command_runner.go:130] > }
	I1202 21:37:57.648114  483106 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:37:57.648141  483106 cache_images.go:86] Images are preloaded, skipping loading
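The preload check is an inventory comparison: the crictl images --output json dump above is matched against the image set required for v1.35.0-beta.0 on crio, and since everything is already present the tarball load is skipped. A manual spot-check of the same inventory, assuming jq is available on the node:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# Expected from the dump above: storage-provisioner v5, coredns v1.13.1,
	# etcd 3.6.5-0, kube-{apiserver,controller-manager,proxy,scheduler}
	# v1.35.0-beta.0, and pause 3.10.1.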
	I1202 21:37:57.648149  483106 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:37:57.648254  483106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
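The kubelet flags above are installed as a systemd drop-in; the empty ExecStart= line clears the stock unit's definition before the minikube one is set. Rendered as the drop-in file it becomes (the exact path is an assumption, the log does not print it here):

	# e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (path assumed)
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet \
	  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
	  --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml \
	  --enforce-node-allocatable= --hostname-override=functional-066896 \
	  --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2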
	I1202 21:37:57.648333  483106 ssh_runner.go:195] Run: crio config
	I1202 21:37:57.700265  483106 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 21:37:57.700298  483106 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 21:37:57.700306  483106 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 21:37:57.700310  483106 command_runner.go:130] > #
	I1202 21:37:57.700318  483106 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 21:37:57.700324  483106 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 21:37:57.700331  483106 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 21:37:57.700339  483106 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 21:37:57.700343  483106 command_runner.go:130] > # reload'.
	I1202 21:37:57.700350  483106 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 21:37:57.700357  483106 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 21:37:57.700363  483106 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 21:37:57.700373  483106 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 21:37:57.700376  483106 command_runner.go:130] > [crio]
	I1202 21:37:57.700387  483106 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 21:37:57.700395  483106 command_runner.go:130] > # containers images, in this directory.
	I1202 21:37:57.700407  483106 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 21:37:57.700421  483106 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 21:37:57.700427  483106 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 21:37:57.700434  483106 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 21:37:57.700447  483106 command_runner.go:130] > # imagestore = ""
	I1202 21:37:57.700456  483106 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 21:37:57.700462  483106 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 21:37:57.700469  483106 command_runner.go:130] > # storage_driver = "overlay"
	I1202 21:37:57.700475  483106 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 21:37:57.700484  483106 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 21:37:57.700488  483106 command_runner.go:130] > # storage_option = [
	I1202 21:37:57.700493  483106 command_runner.go:130] > # ]
	I1202 21:37:57.700499  483106 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 21:37:57.700508  483106 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 21:37:57.700513  483106 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 21:37:57.700520  483106 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 21:37:57.700528  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 21:37:57.700532  483106 command_runner.go:130] > # always happen on a node reboot
	I1202 21:37:57.700541  483106 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 21:37:57.700555  483106 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 21:37:57.700563  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 21:37:57.700568  483106 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 21:37:57.700573  483106 command_runner.go:130] > # version_file_persist = ""
	I1202 21:37:57.700587  483106 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 21:37:57.700595  483106 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 21:37:57.700603  483106 command_runner.go:130] > # internal_wipe = true
	I1202 21:37:57.700612  483106 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 21:37:57.700617  483106 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 21:37:57.700629  483106 command_runner.go:130] > # internal_repair = true
	I1202 21:37:57.700634  483106 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 21:37:57.700640  483106 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 21:37:57.700650  483106 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 21:37:57.700656  483106 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 21:37:57.700661  483106 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 21:37:57.700667  483106 command_runner.go:130] > [crio.api]
	I1202 21:37:57.700672  483106 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 21:37:57.700677  483106 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 21:37:57.700685  483106 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 21:37:57.700690  483106 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 21:37:57.700699  483106 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 21:37:57.700710  483106 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 21:37:57.700714  483106 command_runner.go:130] > # stream_port = "0"
	I1202 21:37:57.700720  483106 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 21:37:57.700725  483106 command_runner.go:130] > # stream_enable_tls = false
	I1202 21:37:57.700731  483106 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 21:37:57.700954  483106 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 21:37:57.700969  483106 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 21:37:57.700976  483106 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 21:37:57.700981  483106 command_runner.go:130] > # stream_tls_cert = ""
	I1202 21:37:57.700988  483106 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 21:37:57.700994  483106 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 21:37:57.701175  483106 command_runner.go:130] > # stream_tls_key = ""
	I1202 21:37:57.701188  483106 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 21:37:57.701195  483106 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 21:37:57.701200  483106 command_runner.go:130] > # automatically pick up the changes.
	I1202 21:37:57.701204  483106 command_runner.go:130] > # stream_tls_ca = ""
	I1202 21:37:57.701226  483106 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701255  483106 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 21:37:57.701272  483106 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701278  483106 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 21:37:57.701285  483106 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 21:37:57.701296  483106 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 21:37:57.701300  483106 command_runner.go:130] > [crio.runtime]
	I1202 21:37:57.701306  483106 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 21:37:57.701315  483106 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 21:37:57.701318  483106 command_runner.go:130] > # "nofile=1024:2048"
	I1202 21:37:57.701324  483106 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 21:37:57.701328  483106 command_runner.go:130] > # default_ulimits = [
	I1202 21:37:57.701331  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701338  483106 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 21:37:57.701348  483106 command_runner.go:130] > # no_pivot = false
	I1202 21:37:57.701354  483106 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 21:37:57.701360  483106 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 21:37:57.701368  483106 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 21:37:57.701374  483106 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 21:37:57.701385  483106 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 21:37:57.701395  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701399  483106 command_runner.go:130] > # conmon = ""
	I1202 21:37:57.701403  483106 command_runner.go:130] > # Cgroup setting for conmon
	I1202 21:37:57.701410  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 21:37:57.701414  483106 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 21:37:57.701420  483106 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 21:37:57.701425  483106 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 21:37:57.701432  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701438  483106 command_runner.go:130] > # conmon_env = [
	I1202 21:37:57.701441  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701447  483106 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 21:37:57.701459  483106 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 21:37:57.701465  483106 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 21:37:57.701470  483106 command_runner.go:130] > # default_env = [
	I1202 21:37:57.701475  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701481  483106 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 21:37:57.701491  483106 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 21:37:57.701495  483106 command_runner.go:130] > # selinux = false
	I1202 21:37:57.701501  483106 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 21:37:57.701509  483106 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 21:37:57.701516  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701526  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.701533  483106 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 21:37:57.701541  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701545  483106 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 21:37:57.701551  483106 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 21:37:57.701559  483106 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 21:37:57.701566  483106 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 21:37:57.701575  483106 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 21:37:57.701580  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701584  483106 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 21:37:57.701590  483106 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 21:37:57.701595  483106 command_runner.go:130] > # the cgroup blockio controller.
	I1202 21:37:57.701601  483106 command_runner.go:130] > # blockio_config_file = ""
	I1202 21:37:57.701608  483106 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 21:37:57.701614  483106 command_runner.go:130] > # blockio parameters.
	I1202 21:37:57.701618  483106 command_runner.go:130] > # blockio_reload = false
	I1202 21:37:57.701625  483106 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 21:37:57.701628  483106 command_runner.go:130] > # irqbalance daemon.
	I1202 21:37:57.701634  483106 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 21:37:57.701642  483106 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 21:37:57.701649  483106 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 21:37:57.701659  483106 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 21:37:57.701689  483106 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 21:37:57.701703  483106 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 21:37:57.701707  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701711  483106 command_runner.go:130] > # rdt_config_file = ""
	I1202 21:37:57.701717  483106 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 21:37:57.701723  483106 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 21:37:57.701730  483106 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 21:37:57.701736  483106 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 21:37:57.701742  483106 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 21:37:57.701751  483106 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 21:37:57.701755  483106 command_runner.go:130] > # will be added.
	I1202 21:37:57.701763  483106 command_runner.go:130] > # default_capabilities = [
	I1202 21:37:57.701968  483106 command_runner.go:130] > # 	"CHOWN",
	I1202 21:37:57.702017  483106 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 21:37:57.702029  483106 command_runner.go:130] > # 	"FSETID",
	I1202 21:37:57.702033  483106 command_runner.go:130] > # 	"FOWNER",
	I1202 21:37:57.702037  483106 command_runner.go:130] > # 	"SETGID",
	I1202 21:37:57.702040  483106 command_runner.go:130] > # 	"SETUID",
	I1202 21:37:57.702175  483106 command_runner.go:130] > # 	"SETPCAP",
	I1202 21:37:57.702197  483106 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 21:37:57.702202  483106 command_runner.go:130] > # 	"KILL",
	I1202 21:37:57.702205  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702213  483106 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 21:37:57.702220  483106 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 21:37:57.702225  483106 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 21:37:57.702232  483106 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 21:37:57.702247  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702251  483106 command_runner.go:130] > default_sysctls = [
	I1202 21:37:57.702282  483106 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 21:37:57.702290  483106 command_runner.go:130] > ]
	I1202 21:37:57.702302  483106 command_runner.go:130] > # List of devices on the host that a
	I1202 21:37:57.702309  483106 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 21:37:57.702317  483106 command_runner.go:130] > # allowed_devices = [
	I1202 21:37:57.702321  483106 command_runner.go:130] > # 	"/dev/fuse",
	I1202 21:37:57.702326  483106 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 21:37:57.702496  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702509  483106 command_runner.go:130] > # List of additional devices. specified as
	I1202 21:37:57.702523  483106 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 21:37:57.702529  483106 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 21:37:57.702539  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702546  483106 command_runner.go:130] > # additional_devices = [
	I1202 21:37:57.702553  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702559  483106 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 21:37:57.702562  483106 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 21:37:57.702593  483106 command_runner.go:130] > # 	"/etc/cdi",
	I1202 21:37:57.702605  483106 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 21:37:57.702609  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702616  483106 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 21:37:57.702632  483106 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 21:37:57.702636  483106 command_runner.go:130] > # Defaults to false.
	I1202 21:37:57.702641  483106 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 21:37:57.702647  483106 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 21:37:57.702655  483106 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 21:37:57.702659  483106 command_runner.go:130] > # hooks_dir = [
	I1202 21:37:57.702849  483106 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 21:37:57.702860  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702867  483106 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 21:37:57.702879  483106 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 21:37:57.702886  483106 command_runner.go:130] > # its default mounts from the following two files:
	I1202 21:37:57.702893  483106 command_runner.go:130] > #
	I1202 21:37:57.702899  483106 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 21:37:57.702905  483106 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 21:37:57.702911  483106 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 21:37:57.702913  483106 command_runner.go:130] > #
	I1202 21:37:57.702919  483106 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 21:37:57.702925  483106 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 21:37:57.702932  483106 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 21:37:57.702937  483106 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 21:37:57.702942  483106 command_runner.go:130] > #
	I1202 21:37:57.702974  483106 command_runner.go:130] > # default_mounts_file = ""
	I1202 21:37:57.702983  483106 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 21:37:57.702990  483106 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 21:37:57.703009  483106 command_runner.go:130] > # pids_limit = -1
	I1202 21:37:57.703018  483106 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1202 21:37:57.703024  483106 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 21:37:57.703030  483106 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 21:37:57.703039  483106 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 21:37:57.703043  483106 command_runner.go:130] > # log_size_max = -1
	I1202 21:37:57.703053  483106 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 21:37:57.703070  483106 command_runner.go:130] > # log_to_journald = false
	I1202 21:37:57.703082  483106 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 21:37:57.703090  483106 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 21:37:57.703102  483106 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 21:37:57.703112  483106 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 21:37:57.703121  483106 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 21:37:57.703294  483106 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 21:37:57.703314  483106 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 21:37:57.703388  483106 command_runner.go:130] > # read_only = false
	I1202 21:37:57.703403  483106 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 21:37:57.703410  483106 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 21:37:57.703414  483106 command_runner.go:130] > # live configuration reload.
	I1202 21:37:57.703418  483106 command_runner.go:130] > # log_level = "info"
	I1202 21:37:57.703429  483106 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 21:37:57.703434  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.703441  483106 command_runner.go:130] > # log_filter = ""
	I1202 21:37:57.703448  483106 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703456  483106 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 21:37:57.703459  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703467  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703471  483106 command_runner.go:130] > # uid_mappings = ""
	I1202 21:37:57.703477  483106 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703489  483106 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 21:37:57.703492  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703500  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703504  483106 command_runner.go:130] > # gid_mappings = ""
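	Following the containerID:HostID:Size form described above, a drop-in such as /etc/crio/crio.conf.d/99-userns.conf (values hypothetical) might read:
	
	  [crio.runtime]
	  uid_mappings = "0:100000:65536"
	  gid_mappings = "0:100000:65536"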
	I1202 21:37:57.703510  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 21:37:57.703518  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703524  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703532  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703561  483106 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 21:37:57.703582  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 21:37:57.703590  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703596  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703606  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703769  483106 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 21:37:57.703787  483106 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 21:37:57.703803  483106 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 21:37:57.703810  483106 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1202 21:37:57.703970  483106 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 21:37:57.703985  483106 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 21:37:57.703996  483106 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 21:37:57.704002  483106 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 21:37:57.704010  483106 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 21:37:57.704013  483106 command_runner.go:130] > # drop_infra_ctr = true
	I1202 21:37:57.704023  483106 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 21:37:57.704035  483106 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1202 21:37:57.704043  483106 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 21:37:57.704046  483106 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 21:37:57.704053  483106 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 21:37:57.704059  483106 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 21:37:57.704066  483106 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 21:37:57.704073  483106 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 21:37:57.704077  483106 command_runner.go:130] > # shared_cpuset = ""
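	Both cpuset options accept the Linux CPU list format; for example, a drop-in reserving two cores for infra containers (values hypothetical):
	
	  [crio.runtime]
	  infra_ctr_cpuset = "0-1"
	  shared_cpuset = "6,8-9"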
	I1202 21:37:57.704088  483106 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 21:37:57.704094  483106 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 21:37:57.704098  483106 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 21:37:57.704111  483106 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 21:37:57.704115  483106 command_runner.go:130] > # pinns_path = ""
	I1202 21:37:57.704126  483106 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 21:37:57.704133  483106 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 21:37:57.704159  483106 command_runner.go:130] > # enable_criu_support = true
	I1202 21:37:57.704170  483106 command_runner.go:130] > # Enable/disable the generation of the container and
	I1202 21:37:57.704177  483106 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1202 21:37:57.704281  483106 command_runner.go:130] > # enable_pod_events = false
	I1202 21:37:57.704302  483106 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 21:37:57.704308  483106 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 21:37:57.704428  483106 command_runner.go:130] > # default_runtime = "crun"
	I1202 21:37:57.704441  483106 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 21:37:57.704455  483106 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1202 21:37:57.704470  483106 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 21:37:57.704476  483106 command_runner.go:130] > # creation as a file is not desired either.
	I1202 21:37:57.704485  483106 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 21:37:57.704501  483106 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 21:37:57.704506  483106 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 21:37:57.704638  483106 command_runner.go:130] > # ]
	I1202 21:37:57.704649  483106 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 21:37:57.704656  483106 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 21:37:57.704663  483106 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 21:37:57.704668  483106 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 21:37:57.704671  483106 command_runner.go:130] > #
	I1202 21:37:57.704676  483106 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 21:37:57.704681  483106 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 21:37:57.704688  483106 command_runner.go:130] > # runtime_type = "oci"
	I1202 21:37:57.704693  483106 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 21:37:57.704697  483106 command_runner.go:130] > # inherit_default_runtime = false
	I1202 21:37:57.704710  483106 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 21:37:57.704715  483106 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 21:37:57.704720  483106 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 21:37:57.704728  483106 command_runner.go:130] > # monitor_env = []
	I1202 21:37:57.704733  483106 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 21:37:57.704737  483106 command_runner.go:130] > # allowed_annotations = []
	I1202 21:37:57.704743  483106 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 21:37:57.704749  483106 command_runner.go:130] > # no_sync_log = false
	I1202 21:37:57.704753  483106 command_runner.go:130] > # default_annotations = {}
	I1202 21:37:57.704757  483106 command_runner.go:130] > # stream_websockets = false
	I1202 21:37:57.704761  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.704791  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.704803  483106 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 21:37:57.704810  483106 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 21:37:57.704816  483106 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 21:37:57.704822  483106 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 21:37:57.704828  483106 command_runner.go:130] > #   in $PATH.
	I1202 21:37:57.704835  483106 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 21:37:57.704844  483106 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 21:37:57.704850  483106 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 21:37:57.704853  483106 command_runner.go:130] > #   state.
	I1202 21:37:57.704859  483106 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 21:37:57.704870  483106 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 21:37:57.704879  483106 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 21:37:57.704885  483106 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 21:37:57.704891  483106 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 21:37:57.704899  483106 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 21:37:57.704907  483106 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 21:37:57.704917  483106 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 21:37:57.704923  483106 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 21:37:57.704931  483106 command_runner.go:130] > #   The currently recognized values are:
	I1202 21:37:57.704940  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 21:37:57.704947  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 21:37:57.704954  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 21:37:57.704962  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 21:37:57.704969  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 21:37:57.704978  483106 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 21:37:57.704985  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 21:37:57.704992  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 21:37:57.705001  483106 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 21:37:57.705008  483106 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 21:37:57.705017  483106 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 21:37:57.705023  483106 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 21:37:57.705029  483106 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 21:37:57.705035  483106 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 21:37:57.705045  483106 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 21:37:57.705054  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 21:37:57.705068  483106 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 21:37:57.705072  483106 command_runner.go:130] > #   deprecated option "conmon".
	I1202 21:37:57.705080  483106 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 21:37:57.705088  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 21:37:57.705095  483106 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 21:37:57.705101  483106 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 21:37:57.705108  483106 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 21:37:57.705113  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 21:37:57.705129  483106 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1202 21:37:57.705135  483106 command_runner.go:130] > #   conmon-rs by using:
	I1202 21:37:57.705143  483106 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 21:37:57.705154  483106 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 21:37:57.705165  483106 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 21:37:57.705176  483106 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 21:37:57.705183  483106 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 21:37:57.705191  483106 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 21:37:57.705198  483106 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 21:37:57.705203  483106 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 21:37:57.705214  483106 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 21:37:57.705222  483106 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 21:37:57.705228  483106 command_runner.go:130] > #   when the machine crashes.
	I1202 21:37:57.705235  483106 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 21:37:57.705243  483106 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 21:37:57.705253  483106 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 21:37:57.705257  483106 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 21:37:57.705263  483106 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 21:37:57.705273  483106 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 21:37:57.705275  483106 command_runner.go:130] > #
	I1202 21:37:57.705280  483106 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 21:37:57.705285  483106 command_runner.go:130] > #
	I1202 21:37:57.705292  483106 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 21:37:57.705301  483106 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 21:37:57.705304  483106 command_runner.go:130] > #
	I1202 21:37:57.705310  483106 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 21:37:57.705317  483106 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 21:37:57.705322  483106 command_runner.go:130] > #
	I1202 21:37:57.705328  483106 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 21:37:57.705331  483106 command_runner.go:130] > # feature.
	I1202 21:37:57.705336  483106 command_runner.go:130] > #
	I1202 21:37:57.705342  483106 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1202 21:37:57.705350  483106 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 21:37:57.705360  483106 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 21:37:57.705367  483106 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 21:37:57.705375  483106 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 21:37:57.705382  483106 command_runner.go:130] > #
	I1202 21:37:57.705388  483106 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 21:37:57.705397  483106 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 21:37:57.705399  483106 command_runner.go:130] > #
	I1202 21:37:57.705405  483106 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1202 21:37:57.705411  483106 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 21:37:57.705416  483106 command_runner.go:130] > #
	I1202 21:37:57.705422  483106 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 21:37:57.705428  483106 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 21:37:57.705433  483106 command_runner.go:130] > # limitation.
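	Putting the notifier requirements above together, a sketch (runtime handler chosen for illustration): the runtime allows the annotation,
	
	  [crio.runtime.runtimes.runc]
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	
	and the pod opts in while disabling restarts, as required above:
	
	  metadata:
	    annotations:
	      io.kubernetes.cri-o.seccompNotifierAction: "stop"
	  spec:
	    restartPolicy: Never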
	I1202 21:37:57.705469  483106 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 21:37:57.705480  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 21:37:57.705484  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705488  483106 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 21:37:57.705492  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705499  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705503  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705510  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705514  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705518  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705521  483106 command_runner.go:130] > allowed_annotations = [
	I1202 21:37:57.705734  483106 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 21:37:57.705745  483106 command_runner.go:130] > ]
	I1202 21:37:57.705770  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705779  483106 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 21:37:57.705849  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 21:37:57.705872  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705883  483106 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 21:37:57.705901  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705906  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705910  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705915  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705921  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705925  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705929  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705937  483106 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 21:37:57.705944  483106 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 21:37:57.705965  483106 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 21:37:57.705974  483106 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 21:37:57.705985  483106 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 21:37:57.706000  483106 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 21:37:57.706009  483106 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 21:37:57.706015  483106 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 21:37:57.706025  483106 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 21:37:57.706051  483106 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 21:37:57.706057  483106 command_runner.go:130] > # to override the default value for that resource type.
	I1202 21:37:57.706077  483106 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 21:37:57.706082  483106 command_runner.go:130] > # Example:
	I1202 21:37:57.706087  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 21:37:57.706091  483106 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 21:37:57.706096  483106 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 21:37:57.706102  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 21:37:57.706105  483106 command_runner.go:130] > # cpuset = "0-1"
	I1202 21:37:57.706108  483106 command_runner.go:130] > # cpushares = "5"
	I1202 21:37:57.706112  483106 command_runner.go:130] > # cpuquota = "1000"
	I1202 21:37:57.706116  483106 command_runner.go:130] > # cpuperiod = "100000"
	I1202 21:37:57.706120  483106 command_runner.go:130] > # cpulimit = "35"
	I1202 21:37:57.706126  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.706131  483106 command_runner.go:130] > # The workload name is workload-type.
	I1202 21:37:57.706143  483106 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 21:37:57.706160  483106 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 21:37:57.706180  483106 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 21:37:57.706189  483106 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 21:37:57.706195  483106 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
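	Tying the workload example together, a pod opting in could carry annotations like the following (container name "app" is hypothetical):
	
	  metadata:
	    annotations:
	      io.crio/workload: ""
	      io.crio.workload-type/app: '{"cpushares": "200"}'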
	I1202 21:37:57.706229  483106 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 21:37:57.706243  483106 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 21:37:57.706247  483106 command_runner.go:130] > # Default value is set to true
	I1202 21:37:57.706253  483106 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 21:37:57.706261  483106 command_runner.go:130] > # disable_hostport_mapping determines whether to disable
	I1202 21:37:57.706266  483106 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 21:37:57.706271  483106 command_runner.go:130] > # Default value is set to 'false'
	I1202 21:37:57.706275  483106 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 21:37:57.706280  483106 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 21:37:57.706291  483106 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 21:37:57.706299  483106 command_runner.go:130] > # timezone = ""
	I1202 21:37:57.706306  483106 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 21:37:57.706308  483106 command_runner.go:130] > #
	I1202 21:37:57.706315  483106 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 21:37:57.706326  483106 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 21:37:57.706329  483106 command_runner.go:130] > [crio.image]
	I1202 21:37:57.706338  483106 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 21:37:57.706348  483106 command_runner.go:130] > # default_transport = "docker://"
	I1202 21:37:57.706354  483106 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 21:37:57.706360  483106 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706497  483106 command_runner.go:130] > # global_auth_file = ""
	I1202 21:37:57.706512  483106 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 21:37:57.706518  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706617  483106 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.706659  483106 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 21:37:57.706671  483106 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706677  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706682  483106 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 21:37:57.706688  483106 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 21:37:57.706698  483106 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 21:37:57.706714  483106 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 21:37:57.706730  483106 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 21:37:57.706734  483106 command_runner.go:130] > # pause_command = "/pause"
	I1202 21:37:57.706749  483106 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 21:37:57.706756  483106 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 21:37:57.706771  483106 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 21:37:57.706777  483106 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 21:37:57.706783  483106 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 21:37:57.706791  483106 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 21:37:57.706795  483106 command_runner.go:130] > # pinned_images = [
	I1202 21:37:57.706798  483106 command_runner.go:130] > # ]
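	For illustration, the three pattern styles described above could be combined as follows (image names other than the pause image are hypothetical):
	
	  [crio.image]
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1", # exact match
	  	"quay.io/example/*",            # glob: wildcard at the end
	  	"*critical*",                   # keyword: wildcards on both ends
	  ]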
	I1202 21:37:57.706806  483106 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 21:37:57.706813  483106 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 21:37:57.706822  483106 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 21:37:57.706828  483106 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 21:37:57.706834  483106 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 21:37:57.707022  483106 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 21:37:57.707046  483106 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 21:37:57.707056  483106 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 21:37:57.707066  483106 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 21:37:57.707073  483106 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1202 21:37:57.707084  483106 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 21:37:57.707105  483106 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 21:37:57.707129  483106 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 21:37:57.707141  483106 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 21:37:57.707146  483106 command_runner.go:130] > # changing them here.
	I1202 21:37:57.707158  483106 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 21:37:57.707163  483106 command_runner.go:130] > # insecure_registries = [
	I1202 21:37:57.707278  483106 command_runner.go:130] > # ]
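	As recommended above, the same effect belongs in containers-registries.conf(5); a minimal sketch with a hypothetical registry address:
	
	  [[registry]]
	  location = "registry.internal:5000"
	  insecure = true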
	I1202 21:37:57.707303  483106 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 21:37:57.707309  483106 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1202 21:37:57.707323  483106 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 21:37:57.707334  483106 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 21:37:57.707518  483106 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 21:37:57.707543  483106 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 21:37:57.707551  483106 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 21:37:57.707565  483106 command_runner.go:130] > # auto_reload_registries = false
	I1202 21:37:57.707577  483106 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 21:37:57.707586  483106 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1202 21:37:57.707593  483106 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 21:37:57.707601  483106 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 21:37:57.707626  483106 command_runner.go:130] > # The mode of short name resolution.
	I1202 21:37:57.707639  483106 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 21:37:57.707646  483106 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1202 21:37:57.707652  483106 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 21:37:57.707737  483106 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 21:37:57.707776  483106 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1202 21:37:57.707797  483106 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 21:37:57.707804  483106 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 21:37:57.707810  483106 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 21:37:57.707814  483106 command_runner.go:130] > # CNI plugins.
	I1202 21:37:57.707818  483106 command_runner.go:130] > [crio.network]
	I1202 21:37:57.707825  483106 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 21:37:57.707834  483106 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1202 21:37:57.707838  483106 command_runner.go:130] > # cni_default_network = ""
	I1202 21:37:57.707843  483106 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 21:37:57.707880  483106 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 21:37:57.707894  483106 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 21:37:57.707898  483106 command_runner.go:130] > # plugin_dirs = [
	I1202 21:37:57.708100  483106 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 21:37:57.708328  483106 command_runner.go:130] > # ]
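	CRI-O picks up CNI configurations from network_dir; a minimal bridge conflist such as /etc/cni/net.d/10-example.conflist (names hypothetical, subnet matching this cluster's pod CIDR) might look like:
	
	  {
	    "cniVersion": "1.0.0",
	    "name": "example-net",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "cni0",
	        "isGateway": true,
	        "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      }
	    ]
	  }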
	I1202 21:37:57.708337  483106 command_runner.go:130] > # List of included pod metrics.
	I1202 21:37:57.708504  483106 command_runner.go:130] > # included_pod_metrics = [
	I1202 21:37:57.708692  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708716  483106 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1202 21:37:57.708721  483106 command_runner.go:130] > [crio.metrics]
	I1202 21:37:57.708725  483106 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 21:37:57.709042  483106 command_runner.go:130] > # enable_metrics = false
	I1202 21:37:57.709050  483106 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 21:37:57.709056  483106 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 21:37:57.709063  483106 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 21:37:57.709070  483106 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 21:37:57.709082  483106 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 21:37:57.709226  483106 command_runner.go:130] > # metrics_collectors = [
	I1202 21:37:57.709424  483106 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 21:37:57.709616  483106 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 21:37:57.709807  483106 command_runner.go:130] > # 	"containers_oom_total",
	I1202 21:37:57.709999  483106 command_runner.go:130] > # 	"processes_defunct",
	I1202 21:37:57.710186  483106 command_runner.go:130] > # 	"operations_total",
	I1202 21:37:57.710377  483106 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 21:37:57.710569  483106 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 21:37:57.710759  483106 command_runner.go:130] > # 	"operations_errors_total",
	I1202 21:37:57.710953  483106 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 21:37:57.711154  483106 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 21:37:57.711347  483106 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 21:37:57.711541  483106 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 21:37:57.711734  483106 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 21:37:57.711929  483106 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 21:37:57.712114  483106 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 21:37:57.712326  483106 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 21:37:57.712521  483106 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 21:37:57.712708  483106 command_runner.go:130] > # ]
	I1202 21:37:57.712718  483106 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 21:37:57.713101  483106 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 21:37:57.713111  483106 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 21:37:57.713462  483106 command_runner.go:130] > # metrics_port = 9090
	I1202 21:37:57.713472  483106 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 21:37:57.713766  483106 command_runner.go:130] > # metrics_socket = ""
	I1202 21:37:57.713798  483106 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 21:37:57.713843  483106 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 21:37:57.713867  483106 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 21:37:57.713890  483106 command_runner.go:130] > # certificate on any modification event.
	I1202 21:37:57.714026  483106 command_runner.go:130] > # metrics_cert = ""
	I1202 21:37:57.714049  483106 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 21:37:57.714055  483106 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 21:37:57.714333  483106 command_runner.go:130] > # metrics_key = ""
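	Enabling the metrics server via a drop-in would look roughly like the following, after which the endpoint at the default host/port above can be scraped:
	
	  [crio.metrics]
	  enable_metrics = true
	  metrics_port = 9090
	
	  # scrape with: curl http://127.0.0.1:9090/metrics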
	I1202 21:37:57.714367  483106 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 21:37:57.714411  483106 command_runner.go:130] > [crio.tracing]
	I1202 21:37:57.714434  483106 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 21:37:57.714690  483106 command_runner.go:130] > # enable_tracing = false
	I1202 21:37:57.714730  483106 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 21:37:57.715040  483106 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 21:37:57.715074  483106 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 21:37:57.715400  483106 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
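	A drop-in enabling tracing against the default endpoint, sampling every span (the 1000000 value is from the comment above):
	
	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "127.0.0.1:4317"
	  tracing_sampling_rate_per_million = 1000000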
	I1202 21:37:57.715424  483106 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 21:37:57.715465  483106 command_runner.go:130] > [crio.nri]
	I1202 21:37:57.715486  483106 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 21:37:57.715706  483106 command_runner.go:130] > # enable_nri = true
	I1202 21:37:57.715731  483106 command_runner.go:130] > # NRI socket to listen on.
	I1202 21:37:57.716042  483106 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 21:37:57.716072  483106 command_runner.go:130] > # NRI plugin directory to use.
	I1202 21:37:57.716381  483106 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 21:37:57.716412  483106 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 21:37:57.716702  483106 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 21:37:57.716734  483106 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 21:37:57.716910  483106 command_runner.go:130] > # nri_disable_connections = false
	I1202 21:37:57.716983  483106 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 21:37:57.717007  483106 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 21:37:57.717025  483106 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 21:37:57.717040  483106 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 21:37:57.717084  483106 command_runner.go:130] > # NRI default validator configuration.
	I1202 21:37:57.717109  483106 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 21:37:57.717127  483106 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 21:37:57.717180  483106 command_runner.go:130] > # can be restricted/rejected:
	I1202 21:37:57.717207  483106 command_runner.go:130] > # - OCI hook injection
	I1202 21:37:57.717238  483106 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 21:37:57.717387  483106 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 21:37:57.717408  483106 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 21:37:57.717448  483106 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 21:37:57.717469  483106 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 21:37:57.717489  483106 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 21:37:57.717520  483106 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 21:37:57.717542  483106 command_runner.go:130] > #
	I1202 21:37:57.717559  483106 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 21:37:57.717588  483106 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 21:37:57.717614  483106 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 21:37:57.717634  483106 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 21:37:57.717673  483106 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 21:37:57.717700  483106 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 21:37:57.717721  483106 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 21:37:57.717750  483106 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 21:37:57.717775  483106 command_runner.go:130] > # ]
	I1202 21:37:57.717791  483106 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
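	Combining the validator fields above, a sketch that enables the validator, rejects OCI hook injection, and requires one hypothetical plugin:
	
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"example-plugin",
	  ]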
	I1202 21:37:57.717809  483106 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 21:37:57.717844  483106 command_runner.go:130] > [crio.stats]
	I1202 21:37:57.717862  483106 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 21:37:57.717880  483106 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 21:37:57.717896  483106 command_runner.go:130] > # stats_collection_period = 0
	I1202 21:37:57.717933  483106 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 21:37:57.717955  483106 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 21:37:57.717969  483106 command_runner.go:130] > # collection_period = 0
	I1202 21:37:57.719581  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.679996811Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 21:37:57.719602  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680035195Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 21:37:57.719612  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680068245Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 21:37:57.719634  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680094978Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 21:37:57.719650  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680175192Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.719661  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680551245Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 21:37:57.719673  483106 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 21:37:57.719793  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:57.719806  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:57.719822  483106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:37:57.719854  483106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:37:57.719977  483106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
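	A config like the one above would typically be applied with kubeadm's --config flag, e.g.:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new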
	
	I1202 21:37:57.720050  483106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:37:57.727128  483106 command_runner.go:130] > kubeadm
	I1202 21:37:57.727200  483106 command_runner.go:130] > kubectl
	I1202 21:37:57.727217  483106 command_runner.go:130] > kubelet
	I1202 21:37:57.727679  483106 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:37:57.727758  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:37:57.735128  483106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:37:57.747401  483106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:37:57.759635  483106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:37:57.772168  483106 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:37:57.775704  483106 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:37:57.775781  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.892482  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:58.414394  483106 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:37:58.414415  483106 certs.go:195] generating shared ca certs ...
	I1202 21:37:58.414431  483106 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:58.414617  483106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:37:58.414690  483106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:37:58.414702  483106 certs.go:257] generating profile certs ...
	I1202 21:37:58.414822  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:37:58.414884  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:37:58.414927  483106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:37:58.414939  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:37:58.414953  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:37:58.414964  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:37:58.414980  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:37:58.414991  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:37:58.415019  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:37:58.415030  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:37:58.415042  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:37:58.415094  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:37:58.415127  483106 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:37:58.415140  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:37:58.415171  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:37:58.415199  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:37:58.415223  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:37:58.415279  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:58.415327  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.415344  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem -> /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.415358  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.415948  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:37:58.434575  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:37:58.454217  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:37:58.476636  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:37:58.499852  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:37:58.517799  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:37:58.537626  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:37:58.556051  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:37:58.573621  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:37:58.591561  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:37:58.609240  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:37:58.626214  483106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:37:58.638898  483106 ssh_runner.go:195] Run: openssl version
	I1202 21:37:58.644941  483106 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:37:58.645379  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:37:58.653758  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657242  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657279  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657350  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.697450  483106 command_runner.go:130] > b5213941
	I1202 21:37:58.697880  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:37:58.705830  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:37:58.714550  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718238  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718320  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718390  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.760939  483106 command_runner.go:130] > 51391683
	I1202 21:37:58.761409  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:37:58.769112  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:37:58.777300  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780878  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780914  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780988  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.821311  483106 command_runner.go:130] > 3ec20f2e
	I1202 21:37:58.821773  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
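Each of the three cert installs above follows the same triplet: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e in the log), and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL's default verifier can locate it. A simplified sketch of one triplet, assuming openssl is on PATH; unlike the log, it links straight to the source file instead of going through the intermediate /etc/ssl/certs/<name>.pem symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject hash (<hash>.0), mirroring the `test -L || ln -fs` step above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}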
	I1202 21:37:58.829482  483106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833099  483106 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833249  483106 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:37:58.833277  483106 command_runner.go:130] > Device: 259,1	Inode: 1309045     Links: 1
	I1202 21:37:58.833296  483106 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:58.833318  483106 command_runner.go:130] > Access: 2025-12-02 21:33:51.106313964 +0000
	I1202 21:37:58.833335  483106 command_runner.go:130] > Modify: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833354  483106 command_runner.go:130] > Change: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833368  483106 command_runner.go:130] >  Birth: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833452  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:37:58.873701  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.874162  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:37:58.914810  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.915281  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:37:58.957479  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.957884  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:37:58.998366  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.998755  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:37:59.041919  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.042032  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:37:59.082406  483106 command_runner.go:130] > Certificate will not expire
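The six `-checkend 86400` runs above exit 0 only when the certificate remains valid for at least the next 86400 seconds (24 hours), which is what produces "Certificate will not expire". The same test can be done natively; a sketch with crypto/x509, reusing a path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no CERTIFICATE block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}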
	I1202 21:37:59.082849  483106 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:59.082947  483106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:37:59.083063  483106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:37:59.109816  483106 cri.go:89] found id: ""
	I1202 21:37:59.109903  483106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:37:59.116871  483106 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:37:59.116937  483106 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:37:59.116958  483106 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:37:59.117791  483106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:37:59.117835  483106 kubeadm.go:598] restartPrimaryControlPlane start ...
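The restart decision just logged hinges on the `sudo ls` probe: because /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env, and /var/lib/minikube/etcd all exist, minikube attempts a control-plane restart instead of a fresh `kubeadm init`. A local sketch of that presence check (the real probe runs over SSH inside the node):

package main

import (
	"fmt"
	"os"
)

// wantsRestart mirrors the `sudo ls` probe above: if the kubelet and
// etcd state paths are all present, minikube attempts a cluster
// restart rather than a fresh kubeadm init.
func wantsRestart() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("attempt restart:", wantsRestart())
}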
	I1202 21:37:59.117913  483106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:37:59.125060  483106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:37:59.125506  483106 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.125617  483106 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-066896" cluster setting kubeconfig missing "functional-066896" context setting]
	I1202 21:37:59.125900  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.126337  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.126509  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
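At this point the kubeconfig is being repaired: both the "functional-066896" cluster and context stanzas were missing, so they are re-added under a file lock before the client config above is built. A rough sketch of such a repair using client-go's clientcmd package; the path, and the assumption that cluster, context, and user all share the profile name, are illustrative:

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds missing cluster/context entries for a profile,
// roughly what kubeconfig.go does when it logs "needs updating (will repair)".
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig("/home/jenkins/.kube/config", "functional-066896", "https://192.168.49.2:8441"); err != nil {
		log.Fatal(err)
	}
}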
	I1202 21:37:59.127095  483106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:37:59.127116  483106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:37:59.127122  483106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:37:59.127127  483106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:37:59.127133  483106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:37:59.127170  483106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:37:59.127484  483106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:37:59.134957  483106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:37:59.134991  483106 kubeadm.go:602] duration metric: took 17.137902ms to restartPrimaryControlPlane
	I1202 21:37:59.135014  483106 kubeadm.go:403] duration metric: took 52.172876ms to StartCluster
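The 17ms restart path above is cheap because the `diff -u` of kubeadm.yaml against kubeadm.yaml.new found no changes, so "The running cluster does not require reconfiguration" and kubeadm is never re-invoked. A sketch of that exit-code test (diff exits 0 for identical files, 1 when they differ, 2 on error):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfig mirrors the `diff -u` probe above: exit status 0 means
// the generated kubeadm.yaml.new matches what is already on disk, so
// the control plane can restart without re-running kubeadm.
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed (status 2)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff error:", err)
		return
	}
	fmt.Println("needs reconfiguration:", changed)
}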
	I1202 21:37:59.135029  483106 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135086  483106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.135727  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135915  483106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:37:59.136175  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:59.136232  483106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:37:59.136325  483106 addons.go:70] Setting storage-provisioner=true in profile "functional-066896"
	I1202 21:37:59.136339  483106 addons.go:239] Setting addon storage-provisioner=true in "functional-066896"
	I1202 21:37:59.136375  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.136437  483106 addons.go:70] Setting default-storageclass=true in profile "functional-066896"
	I1202 21:37:59.136458  483106 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-066896"
	I1202 21:37:59.136761  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.136798  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.139277  483106 out.go:179] * Verifying Kubernetes components...
	I1202 21:37:59.140771  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:59.165976  483106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:37:59.168845  483106 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.168870  483106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:37:59.168937  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.175656  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.176018  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.176385  483106 addons.go:239] Setting addon default-storageclass=true in "functional-066896"
	I1202 21:37:59.176428  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.176909  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.211203  483106 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:37:59.211229  483106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:37:59.211311  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.225207  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.248989  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.349954  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:59.407494  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.408663  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.165713  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165766  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165797  483106 retry.go:31] will retry after 202.822033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165873  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165889  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165899  483106 retry.go:31] will retry after 281.773783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
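Both addon applies fail identically here: the apiserver on localhost:8441 is still coming up, so kubectl cannot download the OpenAPI schema and validation aborts with connection refused. retry.go responds with jittered, growing delays (202ms and 281ms above, then 393ms and up below). A sketch of that retry shape; the delay formula is illustrative, not minikube's exact backoff policy:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs apply until it succeeds or attempts run out,
// sleeping a jittered, growing delay in between, the same shape as the
// retry.go "will retry after ..." lines above.
func retryWithBackoff(attempts int, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := time.Duration(float64(200*time.Millisecond) * float64(i+1) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused")
		}
		return nil
	})
}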
	I1202 21:38:00.166009  483106 node_ready.go:35] waiting up to 6m0s for node "functional-066896" to be "Ready" ...
	I1202 21:38:00.166135  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.166200  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.368900  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.441989  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.442041  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.442063  483106 retry.go:31] will retry after 393.334545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.448331  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.512520  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.512571  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.512592  483106 retry.go:31] will retry after 493.57139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.666814  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.667270  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.835693  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.896509  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.896567  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.896588  483106 retry.go:31] will retry after 517.359335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.006926  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.069882  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.069952  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.069980  483106 retry.go:31] will retry after 823.867865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.167068  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.167622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.415018  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:01.473591  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.473646  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.473665  483106 retry.go:31] will retry after 817.290744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.666990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.667103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.894929  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.964144  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.967581  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.967615  483106 retry.go:31] will retry after 586.961084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.167465  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:02.167512  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
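The round_trippers lines trace the node readiness wait: every ~500ms a GET of /api/v1/nodes/functional-066896 is issued, and while the apiserver is down the response is empty and the connection-refused warning above is logged before the next attempt. A sketch of the same poll with client-go (the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True,
// tolerating connection-refused while the apiserver restarts, the loop
// the round_trippers lines above trace.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-066896", 6*time.Minute))
}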
	I1202 21:38:02.292000  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:02.348780  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.352211  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.352246  483106 retry.go:31] will retry after 1.098539896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.555610  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:02.616881  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.616985  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.617011  483106 retry.go:31] will retry after 1.090026315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.667191  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.667272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.667575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.451026  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:03.515404  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.515439  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.515458  483106 retry.go:31] will retry after 2.58724354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.666944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.667328  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.707632  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:03.776872  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.776924  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.776953  483106 retry.go:31] will retry after 972.290717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.166626  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.166706  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:04.666777  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.666867  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.667243  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:04.667303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:04.749460  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:04.810694  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:04.810734  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.810752  483106 retry.go:31] will retry after 3.951899284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:05.166161  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.166235  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.166558  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:05.666140  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.666212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.102988  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:06.161220  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:06.161263  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.161284  483106 retry.go:31] will retry after 3.838527337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.166366  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.166444  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.666314  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.666386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:07.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.166299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:07.166671  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:07.666338  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.666425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.666777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.166503  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.166606  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.166933  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.666295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.666603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.763053  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:08.821648  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:08.821701  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:08.821721  483106 retry.go:31] will retry after 4.430309202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:09.166538  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.166615  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.166964  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:09.167037  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:09.666806  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.666904  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.667263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.001423  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:10.065960  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:10.069561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.069595  483106 retry.go:31] will retry after 4.835447081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
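	[editor's note] Each apply here fails before anything is submitted: `kubectl apply` first fetches `/openapi/v2` from the apiserver to validate the manifest, and nothing is listening on port 8441 yet, so the fetch itself gets "connection refused" (hence the suggestion to pass `--validate=false`). A small pre-flight probe for that condition is sketched below; the addresses are taken from the log and the probe itself is an illustrative assumption, not something minikube runs.

```go
// Sketch: TCP probe to distinguish "apiserver down" from a real
// validation failure before attempting an apply.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // e.g. "connect: connection refused" while kube-apiserver restarts
	}
	conn.Close()
	return true
}

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		fmt.Printf("%s reachable: %v\n", addr, apiserverUp(addr))
	}
}
```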
	I1202 21:38:10.166750  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.166827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.167127  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.666978  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.667076  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.667385  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:11.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.167266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.167557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:11.167608  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:11.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.666586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.166242  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.167025  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.167092  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.167359  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.252779  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:13.311539  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:13.314561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.314593  483106 retry.go:31] will retry after 7.77807994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.667097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.667178  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.667555  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:13.667614  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:14.166435  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.166532  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.166857  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.666157  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.666230  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.666502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.906038  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:14.963486  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:14.966545  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:14.966583  483106 retry.go:31] will retry after 9.105443561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:15.166926  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.167018  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.167368  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:15.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.666221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.666564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:16.166892  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.166962  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.167321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:16.167385  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:16.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.667311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.667666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.166271  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.166345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.166811  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.666246  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.666576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.166665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.666398  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.666474  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.666809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:18.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:19.167020  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.167103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.167423  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:19.666169  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.666247  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.166216  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.166296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.166641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.666328  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.666687  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:21.093408  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:21.149979  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:21.153644  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.153677  483106 retry.go:31] will retry after 11.903983297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.166790  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.167199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:21.167253  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:21.666923  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.667013  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.667352  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.166588  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.166661  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.166957  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.666842  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.666921  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.667250  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:23.167035  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.167114  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.167459  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:23.167514  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:23.666741  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.666815  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.667100  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.072876  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:24.134664  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:24.134721  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.134742  483106 retry.go:31] will retry after 11.08333461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.166922  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.166990  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.167311  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.666947  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.667038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.667366  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.167335  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.667220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.667299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.667607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:25.667651  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:26.166305  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.166387  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.166780  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:26.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.666584  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.166286  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.166358  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.666223  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.666297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.666627  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:28.166866  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.166938  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:28.167314  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:28.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.667185  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.166534  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.166605  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.166912  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.666220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.666294  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.666610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.166321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.166409  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:30.666751  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:31.166158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.166232  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.166500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:31.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.666300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.666629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.166362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.666462  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:32.666785  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:33.058732  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:33.133401  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:33.133437  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.133456  483106 retry.go:31] will retry after 7.836153133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.166617  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.166698  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.167044  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:33.666857  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.666928  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.166841  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.166919  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.167201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.666992  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.667107  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.667433  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:34.667486  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:35.166145  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.166224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.166561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:35.218798  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:35.277107  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:35.277160  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.277179  483106 retry.go:31] will retry after 18.212486347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.666236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.666575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.166236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.166653  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.666345  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.666418  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.666776  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:37.166874  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.166942  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.167236  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:37.167279  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:37.667058  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.667144  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.167192  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.167270  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.167629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.166835  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.166911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.167230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.667062  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.667137  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.667449  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:39.667503  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:40.166787  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.666946  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.667046  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.667374  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.969813  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:41.027522  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:41.030695  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.030727  483106 retry.go:31] will retry after 26.445141412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.167017  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.167086  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.167412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:41.667158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.667226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.667538  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:41.667593  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:42.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.166302  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.166668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:42.666412  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.666487  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.666864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.167082  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.167382  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.667222  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.667290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.667605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:43.667663  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:44.166619  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.166695  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.167048  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:44.666563  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.666635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.666906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.166291  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.666557  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.666637  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.666980  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:46.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.166248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.166526  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:46.166568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:46.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.666372  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.166454  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.166529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.166849  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.667114  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.667196  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.667500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:48.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.166278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.166598  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:48.166644  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:48.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.166918  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.166985  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.167265  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.667124  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:50.167148  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.167544  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:50.167600  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:50.666859  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.666941  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.667348  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.166149  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.666321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.666742  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.167091  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.167502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.666630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:52.666682  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:53.166365  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.166440  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.166743  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:53.490393  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:53.549126  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:53.552379  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.552413  483106 retry.go:31] will retry after 28.270272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
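The retry.go:31 line above records minikube's generic retry-on-failure behavior: the failed kubectl apply is re-run after the logged backoff (here 28.27s) until it succeeds or the overall wait expires. A minimal, self-contained sketch of that pattern, with hypothetical names rather than minikube's actual retry API:

```go
// Sketch (assumed) of a retry-with-backoff helper matching the
// "will retry after <duration>" lines in this log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil re-runs fn after each failure, sleeping backoff between
// attempts, until fn succeeds or timeout elapses.
func retryUntil(timeout, backoff time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(backoff).After(deadline) {
			return fmt.Errorf("retry deadline exceeded: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err, "attempts:", attempts)
}
```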
	I1202 21:38:53.666480  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.666897  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.166899  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.166977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.167310  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.667106  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.667183  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.667452  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:54.667501  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:55.166711  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.166784  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.167096  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:55.666915  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.666986  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.667321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.167141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.167212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.167527  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.666215  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:57.166258  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:57.166735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:57.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.166964  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.167097  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.167360  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.667123  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.667203  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.667560  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:59.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.166590  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.166930  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:59.166985  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:59.666233  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.666305  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.666578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.166345  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.166424  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.166735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.666605  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.666696  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.667071  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:01.166833  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.166920  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.167258  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:01.167303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:01.667118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.667514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.166229  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.166308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.666901  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.666977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.667267  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:03.167047  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.167126  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.167463  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:03.167519  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:03.667138  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.667208  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.667536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.166363  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.166711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.666264  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.666699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.166401  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.166480  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.166807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.666607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:05.666654  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:06.166221  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.166300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:06.666253  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.666658  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.166933  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.167016  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.167275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.476950  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:07.537734  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:07.540988  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.541021  483106 retry.go:31] will retry after 43.142584555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.666246  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:07.666721  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:08.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.166806  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:08.666497  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.666831  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.167081  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.167424  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.666233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:10.166170  483106 type.go:168] "Request Body" body=""
	I1202 21:39:10.166240  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:10.166510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:10.166560  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:10.666290  483106 type.go:168] "Request Body" body=""
	I1202 21:39:10.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:10.666679  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:11.166219  483106 type.go:168] "Request Body" body=""
	I1202 21:39:11.166300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:11.166624  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:11.666147  483106 type.go:168] "Request Body" body=""
	I1202 21:39:11.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:11.666484  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:12.166223  483106 type.go:168] "Request Body" body=""
	I1202 21:39:12.166293  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:12.166617  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:12.166680  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:12.666258  483106 type.go:168] "Request Body" body=""
	I1202 21:39:12.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:12.666641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:13.167106  483106 type.go:168] "Request Body" body=""
	I1202 21:39:13.167177  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:13.167479  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:13.666184  483106 type.go:168] "Request Body" body=""
	I1202 21:39:13.666262  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:13.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:14.166400  483106 type.go:168] "Request Body" body=""
	I1202 21:39:14.166473  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:14.166820  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:14.166879  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:14.666975  483106 type.go:168] "Request Body" body=""
	I1202 21:39:14.667061  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:14.667380  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:15.167173  483106 type.go:168] "Request Body" body=""
	I1202 21:39:15.167254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:15.167549  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:15.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:39:15.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:15.666659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:16.166211  483106 type.go:168] "Request Body" body=""
	I1202 21:39:16.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:16.166592  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:16.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:39:16.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:16.666667  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:17.166399  483106 type.go:168] "Request Body" body=""
	I1202 21:39:17.166478  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:17.166790  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:17.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:39:17.666296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:17.666629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:18.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:39:18.166356  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:18.166694  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:18.666433  483106 type.go:168] "Request Body" body=""
	I1202 21:39:18.666506  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:18.666858  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:18.666917  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:19.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:39:19.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:19.167267  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:19.667092  483106 type.go:168] "Request Body" body=""
	I1202 21:39:19.667166  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:19.667486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:20.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:20.166275  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:20.166627  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:20.666762  483106 type.go:168] "Request Body" body=""
	I1202 21:39:20.666831  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:20.667148  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:20.667207  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:21.166923  483106 type.go:168] "Request Body" body=""
	I1202 21:39:21.167030  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:21.167353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:21.667178  483106 type.go:168] "Request Body" body=""
	I1202 21:39:21.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:21.667576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:21.822959  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:39:21.878670  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878722  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878822  483106 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
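At out.go:285 the addon machinery stops retrying within this wait window and surfaces the accumulated stderr to the user ("Enabling 'storage-provisioner' returned an error"). As the ssh_runner lines show, the apply step itself is the bundled kubectl invoked with KUBECONFIG pinned; a hypothetical sketch of that invocation and its error wrapping (the helper name is made up, and running it assumes the minikube node's paths):

```go
// Sketch (assumed) of shelling out to the bundled kubectl the way the
// ssh_runner/addons.go lines in this log do, returning combined output
// on failure so it can be logged and retried.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Mirrors "apply failed, will retry: ... Process exited with status 1".
		return fmt.Errorf("apply failed: %s: %w", out, err)
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}
```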
	I1202 21:39:22.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:22.167188  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:22.167486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:22.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:39:22.666296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:22.666649  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:23.166314  483106 type.go:168] "Request Body" body=""
	I1202 21:39:23.166385  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:23.166692  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:23.166739  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:23.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:39:23.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:23.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:24.166668  483106 type.go:168] "Request Body" body=""
	I1202 21:39:24.166744  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:24.167080  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:24.666918  483106 type.go:168] "Request Body" body=""
	I1202 21:39:24.666992  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:24.667347  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:25.166732  483106 type.go:168] "Request Body" body=""
	I1202 21:39:25.166798  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:25.167094  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:25.167141  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:25.666900  483106 type.go:168] "Request Body" body=""
	I1202 21:39:25.666992  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:25.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:26.167051  483106 type.go:168] "Request Body" body=""
	I1202 21:39:26.167153  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:26.167485  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:26.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:39:26.666270  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:26.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:27.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:27.166286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:27.166562  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:27.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:39:27.666353  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:27.666715  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:27.666775  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:28.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:28.166268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:28.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:28.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:39:28.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:28.666620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:29.166566  483106 type.go:168] "Request Body" body=""
	I1202 21:39:29.166638  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:29.166966  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:29.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:39:29.666266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:29.666571  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:30.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:39:30.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:30.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:30.166748  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:30.666468  483106 type.go:168] "Request Body" body=""
	I1202 21:39:30.666548  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:30.666896  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:31.166188  483106 type.go:168] "Request Body" body=""
	I1202 21:39:31.166269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:31.166537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:31.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:39:31.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:31.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:32.166401  483106 type.go:168] "Request Body" body=""
	I1202 21:39:32.166483  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:32.166797  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:32.166854  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:32.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:39:32.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:32.666570  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:33.166273  483106 type.go:168] "Request Body" body=""
	I1202 21:39:33.166360  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:33.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:33.666426  483106 type.go:168] "Request Body" body=""
	I1202 21:39:33.666501  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:33.666838  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:34.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:39:34.166641  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:34.166906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:34.166954  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:34.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:39:34.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:34.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:35.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:39:35.166396  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:35.166764  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:35.667064  483106 type.go:168] "Request Body" body=""
	I1202 21:39:35.667133  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:35.667396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:36.167160  483106 type.go:168] "Request Body" body=""
	I1202 21:39:36.167234  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:36.167571  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:36.167629  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:36.666296  483106 type.go:168] "Request Body" body=""
	I1202 21:39:36.666373  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:36.666715  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:37.167008  483106 type.go:168] "Request Body" body=""
	I1202 21:39:37.167074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:37.167365  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:37.667188  483106 type.go:168] "Request Body" body=""
	I1202 21:39:37.667263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:37.667557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET request/response pair (empty request body, empty response) repeats every ~500ms from 21:39:38.166 through 21:39:50.667 ...]
	W1202 21:39:38.666658  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same node_ready.go warning recurs at 21:39:40, 21:39:42, 21:39:44, 21:39:46 and 21:39:49; every attempt was refused ...]
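The loop above is minikube polling the node's "Ready" condition while the apiserver at 192.168.49.2:8441 refuses connections. A minimal Go sketch of that poll-and-retry shape, assuming a plain HTTP client and a fixed 500ms cadence (minikube's real check goes through an authenticated client-go round tripper and decodes the node object; names here are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the node URL until it answers 200 or the timeout
// expires. Connection-refused errors are logged and retried, matching
// the node_ready.go "will retry" lines in the log above.
func waitNodeReady(url string, timeout time.Duration) error {
	// hypothetical client; real code uses an authenticated client-go client
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("error getting node condition \"Ready\" status (will retry): %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // real code would decode the body and check the Ready condition
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows a ~500ms cadence
	}
	return fmt.Errorf("node was not reachable within %v", timeout)
}

func main() {
	if err := waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-066896", 30*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}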
	I1202 21:39:50.684445  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:50.752913  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.752959  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.753053  483106 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:50.754872  483106 out.go:179] * Enabled addons: 
	I1202 21:39:50.756298  483106 addons.go:530] duration metric: took 1m51.620061888s for enable addons: enabled=[]
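For the storageclass failure just above: "kubectl apply" validates manifests against the apiserver's OpenAPI schema, so with port 8441 down even a well-formed manifest fails before anything is applied, and the addon manager records the failure for retry. A sketch of that apply-with-retry wrapper, assuming a simple fixed backoff (the command line is copied from the log; the retry shape is an assumption, not minikube's actual addons.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out the same command the log shows and retries on
// failure. sudo accepts VAR=value arguments, so KUBECONFIG is passed that way.
func applyManifest(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// mirrors the "apply failed, will retry" warning in the log above
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
		fmt.Println(lastErr)
		time.Sleep(backoff) // fixed backoff is an assumption
	}
	return lastErr
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml", 3, 2*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that passing --validate=false, as the error message suggests, would only skip the OpenAPI download; the apply itself would still fail while the apiserver is unreachable.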
	I1202 21:39:51.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:39:51.166426  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:51.166756  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET request/response pair (empty request body, empty response) repeats every ~500ms from 21:39:51.666 through 21:40:37.666 ...]
	W1202 21:39:51.666948  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same node_ready.go warning recurs roughly every 2-2.5s through 21:40:37.666757 (at 21:39:53, 21:39:56, 21:39:58, 21:40:00, :03, :05, :07, :09, :12, :14, :16, :18, :21, :23, :25, :28, :30, :32, :35 and :37); every attempt was refused ...]
	I1202 21:40:38.166422  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.166500  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.166829  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:38.666194  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.666265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.167095  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.666900  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.666974  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.667318  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:39.667375  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:40.167120  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.167190  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.167543  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:40.666231  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.166347  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.166425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.166750  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.666274  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.666605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:42.166537  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.166619  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:42.167094  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:42.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.666923  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.167057  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.167134  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.167398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.667173  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.667599  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.166501  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.166575  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.166892  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.666149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.666222  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.666488  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:44.666529  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:45.166301  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.166394  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.166815  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:45.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.666688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.166383  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.166726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.666288  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.666390  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.666823  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:46.666883  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:47.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:47.666906  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.666980  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.667259  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.167086  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.167539  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:49.166560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.166634  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:49.166951  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:49.666759  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.666827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.667195  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.167180  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.167561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.166662  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.666376  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.666454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.666782  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:51.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:52.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.166277  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:52.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.666260  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.166242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.166586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:54.166666  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.166740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.167107  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:54.167169  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:54.666965  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.667066  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.667453  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.166768  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.166843  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.167212  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.667075  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.667147  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.166196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.666907  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.666978  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.667341  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:56.667400  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:57.167105  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.167182  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.167548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:57.666151  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.666224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.666574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:59.166616  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.166687  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.167061  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:59.167133  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:59.666436  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.666763  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.166433  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.166775  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.666772  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.666864  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.667256  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.166511  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.166588  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.166874  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.666242  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.666312  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.666652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:01.666713  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:02.166240  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:02.666821  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.667219  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.167019  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.167098  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.167404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.667108  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.667179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.667509  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:03.667571  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:04.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.166539  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:04.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.666387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.666456  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:06.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:06.166736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:06.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.666668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.166352  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.166429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.166638  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:08.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:09.166897  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.167350  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:09.667159  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.667231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.667559  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.166198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.166610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:11.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.166812  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:11.166864  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:11.667095  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.667159  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.667414  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.167205  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.167279  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.167635  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.666270  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.666734  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.166244  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.166554  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.666237  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:13.666743  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:14.166756  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.166839  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.167224  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:14.666384  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.666452  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.666765  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.166506  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.166604  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.666880  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.666953  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.667301  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:15.667360  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:16.167103  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.167186  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.167467  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:16.666185  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.666259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.666581  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.166400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.166698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.666368  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.666435  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.666759  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:18.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.166336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:18.166712  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:18.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.666316  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.166992  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.666925  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.667275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:20.167102  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.167179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.167552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:20.167610  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.166282  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.166361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.166713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.666428  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.666878  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.166118  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.166189  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.166472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.666186  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.666263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.666583  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:22.666636  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:23.166387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:23.666524  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.666616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.666974  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.166861  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.166944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.167295  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.667130  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.667205  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.667569  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:24.667625  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:25.166285  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.166367  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.166640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:25.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.166431  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.166504  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.166839  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.666268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:27.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:27.166741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[~120 further polling attempts elided: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 request is retried every ~500 ms from 21:41:27.666 through 21:42:29.166, each returning an empty response, with a node_ready.go:55 "will retry" warning ("dial tcp 192.168.49.2:8441: connect: connection refused") logged roughly every 2 to 2.5 s]
	I1202 21:42:29.666238  483106 type.go:168] "Request Body" body=""
	I1202 21:42:29.666306  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:29.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:30.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:42:30.166349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:30.166687  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:30.666416  483106 type.go:168] "Request Body" body=""
	I1202 21:42:30.666494  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:30.667129  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:31.166416  483106 type.go:168] "Request Body" body=""
	I1202 21:42:31.166493  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:31.166757  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:31.166799  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:31.666451  483106 type.go:168] "Request Body" body=""
	I1202 21:42:31.666540  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:31.666886  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:32.166604  483106 type.go:168] "Request Body" body=""
	I1202 21:42:32.166679  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:32.167040  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:32.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:42:32.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:32.666604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:33.166343  483106 type.go:168] "Request Body" body=""
	I1202 21:42:33.166414  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:33.166757  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:33.666470  483106 type.go:168] "Request Body" body=""
	I1202 21:42:33.666546  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:33.666897  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:33.666954  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:34.166602  483106 type.go:168] "Request Body" body=""
	I1202 21:42:34.166668  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:34.166925  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:34.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:42:34.666315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:34.666642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:35.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:42:35.166346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:35.166669  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:35.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:42:35.666238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:35.666502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:36.166255  483106 type.go:168] "Request Body" body=""
	I1202 21:42:36.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:36.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:36.166744  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:36.666417  483106 type.go:168] "Request Body" body=""
	I1202 21:42:36.666492  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:36.666845  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:37.166502  483106 type.go:168] "Request Body" body=""
	I1202 21:42:37.166593  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:37.166951  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:37.666782  483106 type.go:168] "Request Body" body=""
	I1202 21:42:37.666857  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:37.667204  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:38.167040  483106 type.go:168] "Request Body" body=""
	I1202 21:42:38.167135  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:38.167508  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:38.167570  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:38.666773  483106 type.go:168] "Request Body" body=""
	I1202 21:42:38.666845  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:38.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:39.167094  483106 type.go:168] "Request Body" body=""
	I1202 21:42:39.167166  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:39.167513  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:39.667211  483106 type.go:168] "Request Body" body=""
	I1202 21:42:39.667304  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:39.667685  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:40.166206  483106 type.go:168] "Request Body" body=""
	I1202 21:42:40.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:40.166574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:40.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:42:40.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:40.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:40.666658  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:41.166208  483106 type.go:168] "Request Body" body=""
	I1202 21:42:41.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:41.166634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:41.666331  483106 type.go:168] "Request Body" body=""
	I1202 21:42:41.666404  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:41.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:42.166257  483106 type.go:168] "Request Body" body=""
	I1202 21:42:42.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:42.166751  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:42.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:42:42.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:42.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:42.666736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:43.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:43.166460  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:43.166745  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:43.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:42:43.666331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:43.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:44.166537  483106 type.go:168] "Request Body" body=""
	I1202 21:42:44.166616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:44.166962  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:44.666281  483106 type.go:168] "Request Body" body=""
	I1202 21:42:44.666350  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:44.666626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:45.166336  483106 type.go:168] "Request Body" body=""
	I1202 21:42:45.166423  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:45.166767  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:45.166816  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:45.666819  483106 type.go:168] "Request Body" body=""
	I1202 21:42:45.666897  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:45.667261  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.166500  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.166583  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.166847  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.666315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.666679  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:47.166414  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.166497  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:47.166838  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:47.666485  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.666557  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.666832  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.166343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.666684  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:49.166554  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.166635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.166960  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:49.167054  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:49.666877  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.666951  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.167131  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.167578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.666932  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.667019  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.667326  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:51.167186  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.167276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.167691  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:51.167754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:51.666431  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.666506  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.666825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.166160  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.166241  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.166511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.166466  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.166825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.667187  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.667483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:53.667539  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:54.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.166598  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.166946  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:54.666794  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.666869  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.166549  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.166809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.666671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:56.166359  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.166777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:56.166834  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:56.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.166224  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.166303  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.166628  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.166503  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:58.666661  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:59.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.166838  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.167155  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:59.666449  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.666515  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.166309  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.166395  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.666575  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.666682  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.667068  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:00.667126  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:01.166853  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.167038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.167371  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:01.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.667265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.667601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.166238  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.166322  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.666979  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.667074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.667353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:02.667401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:03.167145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.167221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.167567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:03.666255  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.666326  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.666639  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.166767  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.667023  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.667100  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.667434  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:04.667488  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:05.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.166259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.166604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:05.666866  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.666932  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.167087  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.167170  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.167507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.666273  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.666702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:07.166389  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.166454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.166729  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:07.166773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:07.666440  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.666529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.666861  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.166628  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.166712  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.167093  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.666822  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.666890  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.667183  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:09.167074  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.167152  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.167512  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:09.167567  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:09.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.666352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.666710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.166961  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.167396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.666160  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.166637  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.666393  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.666463  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.666766  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:11.666808  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:12.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.166645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:12.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.666717  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.166302  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.166710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.666374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:14.166633  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.166711  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.167091  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:14.167149  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:14.666871  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.666946  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.667269  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.167061  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.167138  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.167476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.666203  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.666281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.666622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.166164  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.166245  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.166507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.666216  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:17.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.166577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:17.666191  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.666256  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.666511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.166212  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.166315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.166633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.666248  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:19.166505  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.166576  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.166870  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:19.166918  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:19.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.666567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.166357  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.666369  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.666443  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.666785  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:21.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:22.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.166561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.166824  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:22.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.666368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.166281  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.166368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.166699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.666210  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.666283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:24.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.166660  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.167035  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:24.167111  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:24.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.667230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.166928  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.167024  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.667147  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.667223  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.667622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.166295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.666243  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.666504  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:26.666554  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:27.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.166660  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:27.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.166197  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.166524  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.666680  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:28.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:29.166765  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.166840  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.167165  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:29.666897  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.167174  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.167271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.167625  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.666334  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.666419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.666807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:30.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:31.167152  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.167536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:31.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.166351  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.666287  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.666548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:33.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:33.166706  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:33.666243  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.166799  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.666282  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.666375  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.666726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.166319  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.166392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.666514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:35.666568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:36.166250  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.166626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:36.666324  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.666401  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.666725  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.166908  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.166975  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.667118  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.667398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:37.667447  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:38.166151  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.166226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.166528  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:38.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.666633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.166754  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.167075  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.666637  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.666714  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.667049  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:40.166341  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.166420  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.166681  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:40.166728  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:40.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.666455  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.666787  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.666356  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.666429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:42.166327  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.166411  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.166822  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:42.166896  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:42.666589  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.666665  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.667015  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.166747  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.166812  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.167088  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.666863  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.666934  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.667289  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:44.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.166981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.167339  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:44.167397  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:44.666667  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.666740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.667046  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.166921  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.167029  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.666175  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.666253  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.666621  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.166254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.166514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:46.666754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:47.166451  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.166864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:47.667182  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.667255  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.667579  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.666341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:49.166748  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.166817  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:49.167250  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:49.666922  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.667010  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.166155  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.666900  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.667180  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:51.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.167345  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:51.167391  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:51.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.667233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.667577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.166264  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.666171  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.666249  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.166366  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.666529  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:53.666576  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:54.166567  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.166645  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:54.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.667510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.166265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.166542  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:55.666707  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.166311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.166642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:56.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.666282  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.167073  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.167151  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.167546  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.666340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:57.666741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:58.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:58.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.666328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.666634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:44:00.169272  483106 type.go:168] "Request Body" body=""
	W1202 21:44:00.169401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 21:44:00.169464  483106 node_ready.go:38] duration metric: took 6m0.003439328s for node "functional-066896" to be "Ready" ...
	I1202 21:44:00.175124  483106 out.go:203] 
	W1202 21:44:00.178380  483106 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:44:00.178413  483106 out.go:285] * 
	W1202 21:44:00.180645  483106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:44:00.185151  483106 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436081037Z" level=info msg="Using the internal default seccomp profile"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436142986Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436202104Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436256571Z" level=info msg="RDT not available in the host system"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.436314524Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437068927Z" level=info msg="Conmon does support the --sync option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437154245Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437215505Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437845682Z" level=info msg="Conmon does support the --sync option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.437934142Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.4381156Z" level=info msg="Updated default CNI network name to "
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.438813142Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.439566183Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.439720425Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.47605174Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476242413Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476306643Z" level=info msg="Create NRI interface"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476428245Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476442554Z" level=info msg="runtime interface created"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476456298Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476463396Z" level=info msg="runtime interface starting up..."
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476469705Z" level=info msg="starting plugins..."
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.47648285Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:37:57 functional-066896 crio[6009]: time="2025-12-02T21:37:57.476550781Z" level=info msg="No systemd watchdog enabled"
	Dec 02 21:37:57 functional-066896 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:44:04.849684    9337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:04.850548    9337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:04.852213    9337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:04.852566    9337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:04.854103    9337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:44:04 up  3:26,  0 user,  load average: 0.37, 0.25, 0.50
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:44:02 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:02 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 02 21:44:02 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:02 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:03 functional-066896 kubelet[9213]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:03 functional-066896 kubelet[9213]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:03 functional-066896 kubelet[9213]: E1202 21:44:03.009748    9213 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:03 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:03 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:03 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 02 21:44:03 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:03 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:03 functional-066896 kubelet[9233]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:03 functional-066896 kubelet[9233]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:03 functional-066896 kubelet[9233]: E1202 21:44:03.718618    9233 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:03 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:03 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:04 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 02 21:44:04 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:04 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:04 functional-066896 kubelet[9254]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:04 functional-066896 kubelet[9254]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:04 functional-066896 kubelet[9254]: E1202 21:44:04.475613    9254 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:04 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:04 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
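
The journal excerpt above is the root-cause signal for this group of failures: the v1.35.0-beta.0 kubelet exits during config validation because the host still runs cgroup v1, systemd keeps restarting it (counter 1139-1141), and the apiserver therefore never comes up. A quick way to confirm the cgroup version on the host or inside the node container (a diagnostic sketch, not a command this run executed):

	# prints cgroup2fs on a cgroup v2 host, tmpfs on cgroup v1 (the case kubelet rejects here)
	stat -fc %T /sys/fs/cgroup/

On this Ubuntu 20.04 host the expected output is tmpfs, consistent with the "kubelet is configured to not run on a host using cgroup v1" error.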
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (329.343674ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 kubectl -- --context functional-066896 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 kubectl -- --context functional-066896 get pods: exit status 1 (109.818755ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-066896 kubectl -- --context functional-066896 get pods": exit status 1
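
The refused connection to 192.168.49.2:8441 follows from the same kubelet crash loop recorded in the previous post-mortem: with kubelet down, the apiserver static pod never starts, so nothing listens on the apiserver port. A hedged way to confirm, mirroring the ssh invocations in the audit table below (not a command the harness ran):

	out/minikube-linux-arm64 -p functional-066896 ssh sudo systemctl status kubelet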
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
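
The inspect output shows 8441/tcp (the apiserver port) published only on loopback at an ephemeral host port, 127.0.0.1:33151, which is how minikube reaches the cluster under the docker driver; the harness reads such mappings with inspect templates, as in the "22/tcp" lookups in the Last Start log below. An equivalent one-liner, shown for illustration only:

	docker port functional-066896 8441/tcp
	# expected for this container: 127.0.0.1:33151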
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (310.316409ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs -n 25: (1.026585855s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-218190 image ls --format short --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image   │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete  │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start   │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start   │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:latest                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add minikube-local-cache-test:functional-066896                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache delete minikube-local-cache-test:functional-066896                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl images                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ cache   │ functional-066896 cache reload                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ kubectl │ functional-066896 kubectl -- --context functional-066896 get pods                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:37:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:37:54.052280  483106 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:37:54.052518  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052549  483106 out.go:374] Setting ErrFile to fd 2...
	I1202 21:37:54.052570  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052830  483106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:37:54.053229  483106 out.go:368] Setting JSON to false
	I1202 21:37:54.054096  483106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12002,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:37:54.054239  483106 start.go:143] virtualization:  
	I1202 21:37:54.055968  483106 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:37:54.057216  483106 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:37:54.057305  483106 notify.go:221] Checking for updates...
	I1202 21:37:54.059409  483106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:37:54.060390  483106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:54.061474  483106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:37:54.062609  483106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:37:54.063772  483106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:37:54.065317  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:54.065458  483106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:37:54.087852  483106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:37:54.087968  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.157300  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.14827719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.157407  483106 docker.go:319] overlay module found
	I1202 21:37:54.158855  483106 out.go:179] * Using the docker driver based on existing profile
	I1202 21:37:54.160356  483106 start.go:309] selected driver: docker
	I1202 21:37:54.160374  483106 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.160477  483106 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:37:54.160570  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.221500  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.212376823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.221914  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:54.221982  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:54.222036  483106 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.223816  483106 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:37:54.224907  483106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:37:54.226134  483106 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:37:54.227415  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:54.227490  483106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:37:54.247414  483106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:37:54.247439  483106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:37:54.295322  483106 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:37:54.500334  483106 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:37:54.500536  483106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:37:54.500574  483106 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500673  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:37:54.500684  483106 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.936µs
	I1202 21:37:54.500698  483106 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:37:54.500710  483106 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500741  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:37:54.500746  483106 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.194µs
	I1202 21:37:54.500752  483106 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500761  483106 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500788  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:37:54.500788  483106 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:37:54.500792  483106 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.492µs
	I1202 21:37:54.500799  483106 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500809  483106 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500816  483106 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500852  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:37:54.500856  483106 start.go:364] duration metric: took 26.462µs to acquireMachinesLock for "functional-066896"
	I1202 21:37:54.500858  483106 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.838µs
	I1202 21:37:54.500864  483106 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500869  483106 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:37:54.500875  483106 fix.go:54] fixHost starting: 
	I1202 21:37:54.500873  483106 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500901  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:37:54.500905  483106 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.15µs
	I1202 21:37:54.500919  483106 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500928  483106 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500951  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:37:54.500956  483106 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.833µs
	I1202 21:37:54.500961  483106 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:37:54.500970  483106 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500994  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:37:54.500998  483106 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.391µs
	I1202 21:37:54.501003  483106 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:37:54.501011  483106 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.501036  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:37:54.501040  483106 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.097µs
	I1202 21:37:54.501046  483106 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:37:54.501065  483106 cache.go:87] Successfully saved all images to host disk.
	I1202 21:37:54.501197  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:54.517471  483106 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:37:54.517510  483106 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:37:54.519079  483106 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:37:54.519117  483106 machine.go:94] provisionDockerMachine start ...
	I1202 21:37:54.519205  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.536086  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.536422  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.536437  483106 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:37:54.686523  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.686547  483106 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:37:54.686612  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.710674  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.710988  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.711037  483106 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:37:54.868253  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.868331  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.886749  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.887092  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.887115  483106 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:37:55.036431  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:37:55.036522  483106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:37:55.036593  483106 ubuntu.go:190] setting up certificates
	I1202 21:37:55.036621  483106 provision.go:84] configureAuth start
	I1202 21:37:55.036718  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:55.055483  483106 provision.go:143] copyHostCerts
	I1202 21:37:55.055534  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055575  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:37:55.055589  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055670  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:37:55.055775  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055797  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:37:55.055803  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055836  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:37:55.055880  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055901  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:37:55.055908  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055941  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:37:55.055998  483106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:37:55.445716  483106 provision.go:177] copyRemoteCerts
	I1202 21:37:55.445788  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:37:55.445829  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.462295  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:55.566646  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:37:55.566707  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:37:55.584230  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:37:55.584339  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:37:55.601138  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:37:55.601197  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:37:55.619092  483106 provision.go:87] duration metric: took 582.43702ms to configureAuth
	I1202 21:37:55.619117  483106 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:37:55.619308  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:55.619413  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.637231  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:55.637559  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:55.637573  483106 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:37:55.956144  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:37:55.956170  483106 machine.go:97] duration metric: took 1.437044454s to provisionDockerMachine
	I1202 21:37:55.956204  483106 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:37:55.956218  483106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:37:55.956294  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:37:55.956339  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.980756  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.091648  483106 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:37:56.095210  483106 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:37:56.095237  483106 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:37:56.095243  483106 command_runner.go:130] > VERSION_ID="12"
	I1202 21:37:56.095248  483106 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:37:56.095253  483106 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:37:56.095256  483106 command_runner.go:130] > ID=debian
	I1202 21:37:56.095270  483106 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:37:56.095275  483106 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:37:56.095281  483106 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:37:56.095363  483106 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:37:56.095385  483106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:37:56.095402  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:37:56.095457  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:37:56.095544  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:37:56.095557  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /etc/ssl/certs/4472112.pem
	I1202 21:37:56.095638  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:37:56.095647  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> /etc/test/nested/copy/447211/hosts
	I1202 21:37:56.095696  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:37:56.103392  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:56.120789  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:37:56.138613  483106 start.go:296] duration metric: took 182.392463ms for postStartSetup
	I1202 21:37:56.138692  483106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:37:56.138730  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.156335  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.255560  483106 command_runner.go:130] > 13%
	I1202 21:37:56.256083  483106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:37:56.260264  483106 command_runner.go:130] > 169G
	I1202 21:37:56.260703  483106 fix.go:56] duration metric: took 1.759824513s for fixHost
	I1202 21:37:56.260720  483106 start.go:83] releasing machines lock for "functional-066896", held for 1.759856579s
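
The two df probes above feed the disk-usage check (percent used and free GiB on /var). A rough local equivalent of the first probe in Go, assuming the standard df column layout where the fifth field is Use%:

    // Sketch: reproduce `df -h /var | awk 'NR==2{print $5}'` with os/exec.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("df", "-h", "/var").Output()
        if err != nil {
            panic(err)
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) > 1 {
            fields := strings.Fields(lines[1])
            if len(fields) >= 5 {
                fmt.Println("used:", fields[4]) // Use% column, e.g. "13%"
            }
        }
    }
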
	I1202 21:37:56.260787  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:56.278034  483106 ssh_runner.go:195] Run: cat /version.json
	I1202 21:37:56.278057  483106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:37:56.278086  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.278126  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.294975  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.296343  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.394339  483106 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:37:56.394533  483106 ssh_runner.go:195] Run: systemctl --version
	I1202 21:37:56.493105  483106 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:37:56.493163  483106 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:37:56.493186  483106 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:37:56.493258  483106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:37:56.530464  483106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:37:56.534763  483106 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:37:56.534813  483106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:37:56.534914  483106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:37:56.542668  483106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
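
Before configuring the runtime, the log probes for a loopback CNI config with stat and treats a miss as non-fatal, logging only the warning above. The same probe expressed as a Go glob, illustrative only:

    // Sketch: the loopback-CNI probe (stat /etc/cni/net.d/*loopback.conf*)
    // as filepath.Glob; an empty match is expected and non-fatal.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            panic(err)
        }
        if len(matches) == 0 {
            fmt.Println("loopback cni configuration skipped: not found")
            return
        }
        fmt.Println("found:", matches)
    }
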
	I1202 21:37:56.542693  483106 start.go:496] detecting cgroup driver to use...
	I1202 21:37:56.542754  483106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:37:56.542818  483106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:37:56.557769  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:37:56.570749  483106 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:37:56.570845  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:37:56.586179  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:37:56.599149  483106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:37:56.708191  483106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:37:56.842013  483106 docker.go:234] disabling docker service ...
	I1202 21:37:56.842082  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:37:56.857073  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:37:56.870370  483106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:37:56.987213  483106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:37:57.106635  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
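
Since this cluster uses cri-o, the log stops, disables, and masks the cri-docker and docker units so neither can claim the runtime sockets. A simplified local sketch of that sequence; minikube runs these over ssh with `stop -f` and slightly different verb/unit pairings than shown here:

    // Sketch: stop -> disable -> mask for the docker-side units,
    // tolerating failures for units absent on the image.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
        for _, u := range units {
            for _, verb := range []string{"stop", "disable", "mask"} {
                if err := exec.Command("sudo", "systemctl", verb, u).Run(); err != nil {
                    fmt.Printf("systemctl %s %s: %v (ignored)\n", verb, u, err)
                }
            }
        }
    }
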
	I1202 21:37:57.119596  483106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:37:57.132314  483106 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
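
The bash one-liner above pins crictl to the cri-o socket by writing /etc/crictl.yaml. The same effect in Go, with os.WriteFile standing in for `printf | sudo tee`; path and content are exactly as logged:

    // Sketch: materialize /etc/crictl.yaml pointing crictl at cri-o.
    package main

    import "os"

    func main() {
        const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
        if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }
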
	I1202 21:37:57.133557  483106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:37:57.133663  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.142404  483106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:37:57.142548  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.151265  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.160043  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.168450  483106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:37:57.177232  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.186240  483106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.194528  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.203498  483106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:37:57.209931  483106 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:37:57.210879  483106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
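
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of the first two rewrites as in-process regex edits mirroring the logged sed expressions:

    // Sketch: the pause_image and cgroup_manager rewrites done via sed above.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
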
	I1202 21:37:57.218360  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.328965  483106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:37:57.485223  483106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:37:57.485296  483106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:37:57.489286  483106 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 21:37:57.489311  483106 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:37:57.489318  483106 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 21:37:57.489325  483106 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:57.489330  483106 command_runner.go:130] > Access: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489343  483106 command_runner.go:130] > Modify: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489348  483106 command_runner.go:130] > Change: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489352  483106 command_runner.go:130] >  Birth: -
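
After restarting crio, the log waits up to 60s for the runtime socket, checking it with stat over ssh. A local equivalent as a poll loop; the interval below is an arbitrary choice:

    // Sketch: wait for /var/run/crio/crio.sock to appear, with a deadline.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
                fmt.Println("crio socket is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for crio socket")
    }
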
	I1202 21:37:57.489576  483106 start.go:564] Will wait 60s for crictl version
	I1202 21:37:57.489633  483106 ssh_runner.go:195] Run: which crictl
	I1202 21:37:57.495444  483106 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:37:57.495541  483106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:37:57.522065  483106 command_runner.go:130] > Version:  0.1.0
	I1202 21:37:57.522330  483106 command_runner.go:130] > RuntimeName:  cri-o
	I1202 21:37:57.522612  483106 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 21:37:57.522814  483106 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:37:57.525085  483106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
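
The version probe above shells out to crictl and parses its key/value output. A sketch extracting RuntimeVersion the same way, assuming the plain-text format shown in the log:

    // Sketch: pull RuntimeVersion out of `crictl version` output.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").Output()
        if err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok && strings.TrimSpace(k) == "RuntimeVersion" {
                fmt.Println("cri-o", strings.TrimSpace(v)) // e.g. "cri-o 1.34.2"
            }
        }
    }
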
	I1202 21:37:57.525167  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.560503  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.560529  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.560537  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.560542  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.560547  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.560551  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.560555  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.560560  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.560564  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.560568  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.560572  483106 command_runner.go:130] >      static
	I1202 21:37:57.560580  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.560584  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.560589  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.560595  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.560598  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.560603  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.560612  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.560616  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.560620  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.563007  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.589712  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.589787  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.589809  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.589825  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.589855  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.589880  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.589897  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.589914  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.589955  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.589975  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.589991  483106 command_runner.go:130] >      static
	I1202 21:37:57.590007  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.590023  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.590049  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.590069  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.590086  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.590103  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.590120  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.590146  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.590164  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.593809  483106 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:37:57.595025  483106 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:37:57.611773  483106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:37:57.615442  483106 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:37:57.615683  483106 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:37:57.615790  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:57.615841  483106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:37:57.645971  483106 command_runner.go:130] > {
	I1202 21:37:57.645994  483106 command_runner.go:130] >   "images":  [
	I1202 21:37:57.645998  483106 command_runner.go:130] >     {
	I1202 21:37:57.646007  483106 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:37:57.646011  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646017  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:37:57.646020  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646024  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646033  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 21:37:57.646036  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646041  483106 command_runner.go:130] >       "size":  "29035622",
	I1202 21:37:57.646045  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646049  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646052  483106 command_runner.go:130] >     },
	I1202 21:37:57.646054  483106 command_runner.go:130] >     {
	I1202 21:37:57.646060  483106 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:37:57.646068  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646074  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:37:57.646077  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646080  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646088  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 21:37:57.646096  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646101  483106 command_runner.go:130] >       "size":  "74488375",
	I1202 21:37:57.646105  483106 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:37:57.646109  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646112  483106 command_runner.go:130] >     },
	I1202 21:37:57.646115  483106 command_runner.go:130] >     {
	I1202 21:37:57.646121  483106 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:37:57.646124  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646129  483106 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:37:57.646132  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646136  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646147  483106 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 21:37:57.646150  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646157  483106 command_runner.go:130] >       "size":  "60854229",
	I1202 21:37:57.646161  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646165  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646168  483106 command_runner.go:130] >       },
	I1202 21:37:57.646172  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646175  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646178  483106 command_runner.go:130] >     },
	I1202 21:37:57.646181  483106 command_runner.go:130] >     {
	I1202 21:37:57.646187  483106 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:37:57.646191  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646196  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:37:57.646200  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646203  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646211  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 21:37:57.646216  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646220  483106 command_runner.go:130] >       "size":  "84947242",
	I1202 21:37:57.646223  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646227  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646230  483106 command_runner.go:130] >       },
	I1202 21:37:57.646234  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646238  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646241  483106 command_runner.go:130] >     },
	I1202 21:37:57.646243  483106 command_runner.go:130] >     {
	I1202 21:37:57.646250  483106 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:37:57.646253  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646259  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:37:57.646262  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646266  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646274  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 21:37:57.646277  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646285  483106 command_runner.go:130] >       "size":  "72167568",
	I1202 21:37:57.646289  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646292  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646299  483106 command_runner.go:130] >       },
	I1202 21:37:57.646305  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646309  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646313  483106 command_runner.go:130] >     },
	I1202 21:37:57.646316  483106 command_runner.go:130] >     {
	I1202 21:37:57.646322  483106 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:37:57.646326  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646331  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:37:57.646334  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646338  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646345  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 21:37:57.646348  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646352  483106 command_runner.go:130] >       "size":  "74105124",
	I1202 21:37:57.646356  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646360  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646363  483106 command_runner.go:130] >     },
	I1202 21:37:57.646365  483106 command_runner.go:130] >     {
	I1202 21:37:57.646372  483106 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:37:57.646375  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646381  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:37:57.646384  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646387  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646399  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 21:37:57.646403  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646406  483106 command_runner.go:130] >       "size":  "49819792",
	I1202 21:37:57.646409  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646413  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646416  483106 command_runner.go:130] >       },
	I1202 21:37:57.646421  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646424  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646427  483106 command_runner.go:130] >     },
	I1202 21:37:57.646430  483106 command_runner.go:130] >     {
	I1202 21:37:57.646436  483106 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:37:57.646443  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646447  483106 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.646450  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646454  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646461  483106 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 21:37:57.646464  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646468  483106 command_runner.go:130] >       "size":  "517328",
	I1202 21:37:57.646471  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646474  483106 command_runner.go:130] >         "value":  "65535"
	I1202 21:37:57.646477  483106 command_runner.go:130] >       },
	I1202 21:37:57.646481  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646485  483106 command_runner.go:130] >       "pinned":  true
	I1202 21:37:57.646488  483106 command_runner.go:130] >     }
	I1202 21:37:57.646491  483106 command_runner.go:130] >   ]
	I1202 21:37:57.646493  483106 command_runner.go:130] > }
	I1202 21:37:57.648114  483106 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:37:57.648141  483106 cache_images.go:86] Images are preloaded, skipping loading
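
The preload check above lists images via crictl in JSON and concludes that everything needed is already present. A sketch of that decision using the JSON shape shown in the log; the required-image list below is a stand-in, not minikube's actual manifest:

    // Sketch: decide "all images are preloaded" from `crictl images --output json`.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        // Hypothetical required set for illustration only.
        for _, r := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/pause:3.10.1",
        } {
            if !have[r] {
                fmt.Println("missing:", r)
                return
            }
        }
        fmt.Println("all images are preloaded")
    }
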
	I1202 21:37:57.648149  483106 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:37:57.648254  483106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
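
The kubelet drop-in above is generated from the node config that follows it. A sketch assembling the ExecStart line from the same logged values; the template itself is illustrative, not minikube's generator:

    // Sketch: build the kubelet ExecStart line shown in the unit above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--cgroups-per-qos=false",
            "--config=/var/lib/kubelet/config.yaml",
            "--enforce-node-allocatable=",
            "--hostname-override=functional-066896",
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=192.168.49.2",
        }
        execStart := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet " + strings.Join(flags, " ")
        fmt.Printf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s\n\n[Install]\n", execStart)
    }
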
	I1202 21:37:57.648333  483106 ssh_runner.go:195] Run: crio config
	I1202 21:37:57.700265  483106 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 21:37:57.700298  483106 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 21:37:57.700306  483106 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 21:37:57.700310  483106 command_runner.go:130] > #
	I1202 21:37:57.700318  483106 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 21:37:57.700324  483106 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 21:37:57.700331  483106 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 21:37:57.700339  483106 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 21:37:57.700343  483106 command_runner.go:130] > # reload'.
	I1202 21:37:57.700350  483106 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 21:37:57.700357  483106 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 21:37:57.700363  483106 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 21:37:57.700373  483106 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 21:37:57.700376  483106 command_runner.go:130] > [crio]
	I1202 21:37:57.700387  483106 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 21:37:57.700395  483106 command_runner.go:130] > # containers images, in this directory.
	I1202 21:37:57.700407  483106 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 21:37:57.700421  483106 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 21:37:57.700427  483106 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 21:37:57.700434  483106 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 21:37:57.700447  483106 command_runner.go:130] > # imagestore = ""
	I1202 21:37:57.700456  483106 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 21:37:57.700462  483106 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 21:37:57.700469  483106 command_runner.go:130] > # storage_driver = "overlay"
	I1202 21:37:57.700475  483106 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 21:37:57.700484  483106 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 21:37:57.700488  483106 command_runner.go:130] > # storage_option = [
	I1202 21:37:57.700493  483106 command_runner.go:130] > # ]
	I1202 21:37:57.700499  483106 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 21:37:57.700508  483106 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 21:37:57.700513  483106 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 21:37:57.700520  483106 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 21:37:57.700528  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 21:37:57.700532  483106 command_runner.go:130] > # always happen on a node reboot
	I1202 21:37:57.700541  483106 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 21:37:57.700555  483106 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 21:37:57.700563  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 21:37:57.700568  483106 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 21:37:57.700573  483106 command_runner.go:130] > # version_file_persist = ""
	I1202 21:37:57.700587  483106 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 21:37:57.700595  483106 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 21:37:57.700603  483106 command_runner.go:130] > # internal_wipe = true
	I1202 21:37:57.700612  483106 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 21:37:57.700617  483106 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 21:37:57.700629  483106 command_runner.go:130] > # internal_repair = true
	I1202 21:37:57.700634  483106 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 21:37:57.700640  483106 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 21:37:57.700650  483106 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 21:37:57.700656  483106 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 21:37:57.700661  483106 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 21:37:57.700667  483106 command_runner.go:130] > [crio.api]
	I1202 21:37:57.700672  483106 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 21:37:57.700677  483106 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 21:37:57.700685  483106 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 21:37:57.700690  483106 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 21:37:57.700699  483106 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 21:37:57.700710  483106 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 21:37:57.700714  483106 command_runner.go:130] > # stream_port = "0"
	I1202 21:37:57.700720  483106 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 21:37:57.700725  483106 command_runner.go:130] > # stream_enable_tls = false
	I1202 21:37:57.700731  483106 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 21:37:57.700954  483106 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 21:37:57.700969  483106 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 21:37:57.700976  483106 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 21:37:57.700981  483106 command_runner.go:130] > # stream_tls_cert = ""
	I1202 21:37:57.700988  483106 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 21:37:57.700994  483106 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 21:37:57.701175  483106 command_runner.go:130] > # stream_tls_key = ""
	I1202 21:37:57.701188  483106 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 21:37:57.701195  483106 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 21:37:57.701200  483106 command_runner.go:130] > # automatically pick up the changes.
	I1202 21:37:57.701204  483106 command_runner.go:130] > # stream_tls_ca = ""
	I1202 21:37:57.701226  483106 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701255  483106 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 21:37:57.701272  483106 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701278  483106 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 21:37:57.701285  483106 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 21:37:57.701296  483106 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 21:37:57.701300  483106 command_runner.go:130] > [crio.runtime]
	I1202 21:37:57.701306  483106 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 21:37:57.701315  483106 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 21:37:57.701318  483106 command_runner.go:130] > # "nofile=1024:2048"
	I1202 21:37:57.701324  483106 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 21:37:57.701328  483106 command_runner.go:130] > # default_ulimits = [
	I1202 21:37:57.701331  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701338  483106 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 21:37:57.701348  483106 command_runner.go:130] > # no_pivot = false
	I1202 21:37:57.701354  483106 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 21:37:57.701360  483106 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 21:37:57.701368  483106 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 21:37:57.701374  483106 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 21:37:57.701385  483106 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 21:37:57.701395  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701399  483106 command_runner.go:130] > # conmon = ""
	I1202 21:37:57.701403  483106 command_runner.go:130] > # Cgroup setting for conmon
	I1202 21:37:57.701410  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 21:37:57.701414  483106 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 21:37:57.701420  483106 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 21:37:57.701425  483106 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 21:37:57.701432  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701438  483106 command_runner.go:130] > # conmon_env = [
	I1202 21:37:57.701441  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701447  483106 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 21:37:57.701459  483106 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 21:37:57.701465  483106 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 21:37:57.701470  483106 command_runner.go:130] > # default_env = [
	I1202 21:37:57.701475  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701481  483106 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 21:37:57.701491  483106 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 21:37:57.701495  483106 command_runner.go:130] > # selinux = false
	I1202 21:37:57.701501  483106 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 21:37:57.701509  483106 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 21:37:57.701516  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701526  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.701533  483106 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 21:37:57.701541  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701545  483106 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 21:37:57.701551  483106 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 21:37:57.701559  483106 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 21:37:57.701566  483106 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 21:37:57.701575  483106 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 21:37:57.701580  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701584  483106 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 21:37:57.701590  483106 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 21:37:57.701595  483106 command_runner.go:130] > # the cgroup blockio controller.
	I1202 21:37:57.701601  483106 command_runner.go:130] > # blockio_config_file = ""
	I1202 21:37:57.701608  483106 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 21:37:57.701614  483106 command_runner.go:130] > # blockio parameters.
	I1202 21:37:57.701618  483106 command_runner.go:130] > # blockio_reload = false
	I1202 21:37:57.701625  483106 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 21:37:57.701628  483106 command_runner.go:130] > # irqbalance daemon.
	I1202 21:37:57.701634  483106 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 21:37:57.701642  483106 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 21:37:57.701649  483106 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 21:37:57.701659  483106 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 21:37:57.701689  483106 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 21:37:57.701703  483106 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 21:37:57.701707  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701711  483106 command_runner.go:130] > # rdt_config_file = ""
	I1202 21:37:57.701717  483106 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 21:37:57.701723  483106 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 21:37:57.701730  483106 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 21:37:57.701736  483106 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 21:37:57.701742  483106 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 21:37:57.701751  483106 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 21:37:57.701755  483106 command_runner.go:130] > # will be added.
	I1202 21:37:57.701763  483106 command_runner.go:130] > # default_capabilities = [
	I1202 21:37:57.701968  483106 command_runner.go:130] > # 	"CHOWN",
	I1202 21:37:57.702017  483106 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 21:37:57.702029  483106 command_runner.go:130] > # 	"FSETID",
	I1202 21:37:57.702033  483106 command_runner.go:130] > # 	"FOWNER",
	I1202 21:37:57.702037  483106 command_runner.go:130] > # 	"SETGID",
	I1202 21:37:57.702040  483106 command_runner.go:130] > # 	"SETUID",
	I1202 21:37:57.702175  483106 command_runner.go:130] > # 	"SETPCAP",
	I1202 21:37:57.702197  483106 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 21:37:57.702202  483106 command_runner.go:130] > # 	"KILL",
	I1202 21:37:57.702205  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702213  483106 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 21:37:57.702220  483106 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 21:37:57.702225  483106 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 21:37:57.702232  483106 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 21:37:57.702247  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702251  483106 command_runner.go:130] > default_sysctls = [
	I1202 21:37:57.702282  483106 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 21:37:57.702290  483106 command_runner.go:130] > ]
	I1202 21:37:57.702302  483106 command_runner.go:130] > # List of devices on the host that a
	I1202 21:37:57.702309  483106 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 21:37:57.702317  483106 command_runner.go:130] > # allowed_devices = [
	I1202 21:37:57.702321  483106 command_runner.go:130] > # 	"/dev/fuse",
	I1202 21:37:57.702326  483106 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 21:37:57.702496  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702509  483106 command_runner.go:130] > # List of additional devices. specified as
	I1202 21:37:57.702523  483106 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 21:37:57.702529  483106 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 21:37:57.702539  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702546  483106 command_runner.go:130] > # additional_devices = [
	I1202 21:37:57.702553  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702559  483106 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 21:37:57.702562  483106 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 21:37:57.702593  483106 command_runner.go:130] > # 	"/etc/cdi",
	I1202 21:37:57.702605  483106 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 21:37:57.702609  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702616  483106 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 21:37:57.702632  483106 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 21:37:57.702636  483106 command_runner.go:130] > # Defaults to false.
	I1202 21:37:57.702641  483106 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 21:37:57.702647  483106 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 21:37:57.702655  483106 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 21:37:57.702659  483106 command_runner.go:130] > # hooks_dir = [
	I1202 21:37:57.702849  483106 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 21:37:57.702860  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702867  483106 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 21:37:57.702879  483106 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 21:37:57.702886  483106 command_runner.go:130] > # its default mounts from the following two files:
	I1202 21:37:57.702893  483106 command_runner.go:130] > #
	I1202 21:37:57.702899  483106 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 21:37:57.702905  483106 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 21:37:57.702911  483106 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 21:37:57.702913  483106 command_runner.go:130] > #
	I1202 21:37:57.702919  483106 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 21:37:57.702925  483106 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 21:37:57.702932  483106 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 21:37:57.702937  483106 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 21:37:57.702942  483106 command_runner.go:130] > #
	I1202 21:37:57.702974  483106 command_runner.go:130] > # default_mounts_file = ""
	I1202 21:37:57.702983  483106 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 21:37:57.702990  483106 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 21:37:57.703009  483106 command_runner.go:130] > # pids_limit = -1
	I1202 21:37:57.703018  483106 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1202 21:37:57.703024  483106 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 21:37:57.703030  483106 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 21:37:57.703039  483106 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 21:37:57.703043  483106 command_runner.go:130] > # log_size_max = -1
	I1202 21:37:57.703053  483106 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 21:37:57.703070  483106 command_runner.go:130] > # log_to_journald = false
	I1202 21:37:57.703082  483106 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 21:37:57.703090  483106 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 21:37:57.703102  483106 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 21:37:57.703112  483106 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 21:37:57.703121  483106 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 21:37:57.703294  483106 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 21:37:57.703314  483106 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 21:37:57.703388  483106 command_runner.go:130] > # read_only = false
	I1202 21:37:57.703403  483106 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 21:37:57.703410  483106 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 21:37:57.703414  483106 command_runner.go:130] > # live configuration reload.
	I1202 21:37:57.703418  483106 command_runner.go:130] > # log_level = "info"
	I1202 21:37:57.703429  483106 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 21:37:57.703434  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.703441  483106 command_runner.go:130] > # log_filter = ""
	I1202 21:37:57.703448  483106 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703456  483106 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 21:37:57.703459  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703467  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703471  483106 command_runner.go:130] > # uid_mappings = ""
	I1202 21:37:57.703477  483106 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703489  483106 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 21:37:57.703492  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703500  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703504  483106 command_runner.go:130] > # gid_mappings = ""
	I1202 21:37:57.703510  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 21:37:57.703518  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703524  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703532  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703561  483106 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 21:37:57.703582  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 21:37:57.703590  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703596  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703606  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703769  483106 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 21:37:57.703787  483106 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 21:37:57.703803  483106 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 21:37:57.703810  483106 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 21:37:57.703970  483106 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 21:37:57.703985  483106 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 21:37:57.703996  483106 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 21:37:57.704002  483106 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 21:37:57.704010  483106 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 21:37:57.704013  483106 command_runner.go:130] > # drop_infra_ctr = true
	I1202 21:37:57.704023  483106 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 21:37:57.704035  483106 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 21:37:57.704043  483106 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 21:37:57.704046  483106 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 21:37:57.704053  483106 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 21:37:57.704059  483106 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 21:37:57.704066  483106 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 21:37:57.704073  483106 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 21:37:57.704077  483106 command_runner.go:130] > # shared_cpuset = ""
	I1202 21:37:57.704088  483106 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 21:37:57.704094  483106 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 21:37:57.704098  483106 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 21:37:57.704111  483106 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 21:37:57.704115  483106 command_runner.go:130] > # pinns_path = ""
	I1202 21:37:57.704126  483106 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 21:37:57.704133  483106 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 21:37:57.704159  483106 command_runner.go:130] > # enable_criu_support = true
	I1202 21:37:57.704170  483106 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 21:37:57.704177  483106 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 21:37:57.704281  483106 command_runner.go:130] > # enable_pod_events = false
	I1202 21:37:57.704302  483106 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 21:37:57.704308  483106 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 21:37:57.704428  483106 command_runner.go:130] > # default_runtime = "crun"
	I1202 21:37:57.704441  483106 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 21:37:57.704455  483106 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1202 21:37:57.704470  483106 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 21:37:57.704476  483106 command_runner.go:130] > # creation as a file is not desired either.
	I1202 21:37:57.704485  483106 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 21:37:57.704501  483106 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 21:37:57.704506  483106 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 21:37:57.704638  483106 command_runner.go:130] > # ]
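	A minimal sketch of the option just described, using the /etc/hostname case from the comment above (the entry is illustrative):

	    absent_mount_sources_to_reject = [
	        "/etc/hostname",   # fail container creation instead of silently creating a directory here
	    ]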
	I1202 21:37:57.704649  483106 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 21:37:57.704656  483106 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 21:37:57.704663  483106 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 21:37:57.704668  483106 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 21:37:57.704671  483106 command_runner.go:130] > #
	I1202 21:37:57.704676  483106 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 21:37:57.704681  483106 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 21:37:57.704688  483106 command_runner.go:130] > # runtime_type = "oci"
	I1202 21:37:57.704693  483106 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 21:37:57.704697  483106 command_runner.go:130] > # inherit_default_runtime = false
	I1202 21:37:57.704710  483106 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 21:37:57.704715  483106 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 21:37:57.704720  483106 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 21:37:57.704728  483106 command_runner.go:130] > # monitor_env = []
	I1202 21:37:57.704733  483106 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 21:37:57.704737  483106 command_runner.go:130] > # allowed_annotations = []
	I1202 21:37:57.704743  483106 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 21:37:57.704749  483106 command_runner.go:130] > # no_sync_log = false
	I1202 21:37:57.704753  483106 command_runner.go:130] > # default_annotations = {}
	I1202 21:37:57.704757  483106 command_runner.go:130] > # stream_websockets = false
	I1202 21:37:57.704761  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.704791  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.704803  483106 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 21:37:57.704810  483106 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 21:37:57.704816  483106 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 21:37:57.704822  483106 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 21:37:57.704828  483106 command_runner.go:130] > #   in $PATH.
	I1202 21:37:57.704835  483106 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 21:37:57.704844  483106 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 21:37:57.704850  483106 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 21:37:57.704853  483106 command_runner.go:130] > #   state.
	I1202 21:37:57.704859  483106 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 21:37:57.704870  483106 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 21:37:57.704879  483106 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 21:37:57.704885  483106 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 21:37:57.704891  483106 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 21:37:57.704899  483106 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 21:37:57.704907  483106 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 21:37:57.704917  483106 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 21:37:57.704923  483106 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 21:37:57.704931  483106 command_runner.go:130] > #   The currently recognized values are:
	I1202 21:37:57.704940  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 21:37:57.704947  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 21:37:57.704954  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 21:37:57.704962  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 21:37:57.704969  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 21:37:57.704978  483106 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 21:37:57.704985  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 21:37:57.704992  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 21:37:57.705001  483106 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 21:37:57.705008  483106 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 21:37:57.705017  483106 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 21:37:57.705023  483106 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 21:37:57.705029  483106 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 21:37:57.705035  483106 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 21:37:57.705045  483106 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 21:37:57.705054  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 21:37:57.705068  483106 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 21:37:57.705072  483106 command_runner.go:130] > #   deprecated option "conmon".
	I1202 21:37:57.705080  483106 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 21:37:57.705088  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 21:37:57.705095  483106 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 21:37:57.705101  483106 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 21:37:57.705108  483106 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 21:37:57.705113  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 21:37:57.705129  483106 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 21:37:57.705135  483106 command_runner.go:130] > #   conmon-rs by using:
	I1202 21:37:57.705143  483106 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 21:37:57.705154  483106 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 21:37:57.705165  483106 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 21:37:57.705176  483106 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 21:37:57.705183  483106 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 21:37:57.705191  483106 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 21:37:57.705198  483106 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 21:37:57.705203  483106 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 21:37:57.705214  483106 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 21:37:57.705222  483106 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 21:37:57.705228  483106 command_runner.go:130] > #   when a machine crash happens.
	I1202 21:37:57.705235  483106 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 21:37:57.705243  483106 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 21:37:57.705253  483106 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 21:37:57.705257  483106 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 21:37:57.705263  483106 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 21:37:57.705273  483106 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
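	As a hedged illustration of the runtime table described above, a hypothetical VM-type handler could be declared as follows (the handler name and all paths are placeholders, not taken from this configuration):

	    [crio.runtime.runtimes.kata]
	    runtime_path = "/usr/bin/kata-runtime"                 # absolute path; if omitted, "kata" is resolved from $PATH
	    runtime_type = "vm"                                    # "oci" is assumed when omitted
	    runtime_root = "/run/kata"
	    runtime_config_path = "/etc/kata/configuration.toml"   # only valid with the "vm" runtime_type
	    privileged_without_host_devices = true
	    allowed_annotations = ["io.kubernetes.cri-o.Devices"]  # experimental annotations this handler may process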
	I1202 21:37:57.705275  483106 command_runner.go:130] > #
	I1202 21:37:57.705280  483106 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 21:37:57.705285  483106 command_runner.go:130] > #
	I1202 21:37:57.705292  483106 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 21:37:57.705301  483106 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1202 21:37:57.705304  483106 command_runner.go:130] > #
	I1202 21:37:57.705310  483106 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 21:37:57.705317  483106 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 21:37:57.705322  483106 command_runner.go:130] > #
	I1202 21:37:57.705328  483106 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 21:37:57.705331  483106 command_runner.go:130] > # feature.
	I1202 21:37:57.705336  483106 command_runner.go:130] > #
	I1202 21:37:57.705342  483106 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1202 21:37:57.705350  483106 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 21:37:57.705360  483106 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 21:37:57.705367  483106 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 21:37:57.705375  483106 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 21:37:57.705382  483106 command_runner.go:130] > #
	I1202 21:37:57.705388  483106 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 21:37:57.705397  483106 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 21:37:57.705399  483106 command_runner.go:130] > #
	I1202 21:37:57.705405  483106 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1202 21:37:57.705411  483106 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 21:37:57.705416  483106 command_runner.go:130] > #
	I1202 21:37:57.705422  483106 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 21:37:57.705428  483106 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 21:37:57.705433  483106 command_runner.go:130] > # limitation.
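	Putting the notifier requirements above together, a minimal sketch of a handler that permits the annotation (assuming runc >= 1.1.0; the pod then sets io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, as noted):

	    [crio.runtime.runtimes.runc]
	    runtime_path = "/usr/libexec/crio/runc"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",   # enables the seccomp notifier for pods using this handler
	    ]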
	I1202 21:37:57.705469  483106 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 21:37:57.705480  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 21:37:57.705484  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705488  483106 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 21:37:57.705492  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705499  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705503  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705510  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705514  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705518  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705521  483106 command_runner.go:130] > allowed_annotations = [
	I1202 21:37:57.705734  483106 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 21:37:57.705745  483106 command_runner.go:130] > ]
	I1202 21:37:57.705770  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705779  483106 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 21:37:57.705849  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 21:37:57.705872  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705883  483106 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 21:37:57.705901  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705906  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705910  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705915  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705921  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705925  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705929  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705937  483106 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 21:37:57.705944  483106 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 21:37:57.705965  483106 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 21:37:57.705974  483106 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 21:37:57.705985  483106 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 21:37:57.706000  483106 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 21:37:57.706009  483106 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 21:37:57.706015  483106 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 21:37:57.706025  483106 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 21:37:57.706051  483106 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 21:37:57.706057  483106 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 21:37:57.706077  483106 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 21:37:57.706082  483106 command_runner.go:130] > # Example:
	I1202 21:37:57.706087  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 21:37:57.706091  483106 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 21:37:57.706096  483106 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 21:37:57.706102  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 21:37:57.706105  483106 command_runner.go:130] > # cpuset = "0-1"
	I1202 21:37:57.706108  483106 command_runner.go:130] > # cpushares = "5"
	I1202 21:37:57.706112  483106 command_runner.go:130] > # cpuquota = "1000"
	I1202 21:37:57.706116  483106 command_runner.go:130] > # cpuperiod = "100000"
	I1202 21:37:57.706120  483106 command_runner.go:130] > # cpulimit = "35"
	I1202 21:37:57.706126  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.706131  483106 command_runner.go:130] > # The workload name is workload-type.
	I1202 21:37:57.706143  483106 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 21:37:57.706160  483106 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 21:37:57.706180  483106 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 21:37:57.706189  483106 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 21:37:57.706195  483106 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
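	To tie the workload example above to a pod, the opt-in lives in pod metadata rather than in this file; a sketch following the $annotation_prefix.$resource/$ctrName form described earlier (the container name "ctr1" and values are hypothetical), shown as TOML comments:

	    # metadata:
	    #   annotations:
	    #     io.crio/workload: ""                          # activation annotation; key-only, value ignored
	    #     io.crio.workload-type.cpushares/ctr1: "10"    # per-container override of the default cpushares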
	I1202 21:37:57.706229  483106 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 21:37:57.706243  483106 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 21:37:57.706247  483106 command_runner.go:130] > # Default value is set to true
	I1202 21:37:57.706253  483106 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 21:37:57.706261  483106 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 21:37:57.706266  483106 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 21:37:57.706271  483106 command_runner.go:130] > # Default value is set to 'false'
	I1202 21:37:57.706275  483106 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 21:37:57.706280  483106 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 21:37:57.706291  483106 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 21:37:57.706299  483106 command_runner.go:130] > # timezone = ""
	I1202 21:37:57.706306  483106 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 21:37:57.706308  483106 command_runner.go:130] > #
	I1202 21:37:57.706315  483106 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 21:37:57.706326  483106 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 21:37:57.706329  483106 command_runner.go:130] > [crio.image]
	I1202 21:37:57.706338  483106 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 21:37:57.706348  483106 command_runner.go:130] > # default_transport = "docker://"
	I1202 21:37:57.706354  483106 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 21:37:57.706360  483106 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706497  483106 command_runner.go:130] > # global_auth_file = ""
	I1202 21:37:57.706512  483106 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 21:37:57.706518  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706617  483106 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.706659  483106 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 21:37:57.706671  483106 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706677  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706682  483106 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 21:37:57.706688  483106 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 21:37:57.706698  483106 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 21:37:57.706714  483106 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 21:37:57.706730  483106 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 21:37:57.706734  483106 command_runner.go:130] > # pause_command = "/pause"
	I1202 21:37:57.706749  483106 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 21:37:57.706756  483106 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 21:37:57.706771  483106 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 21:37:57.706777  483106 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 21:37:57.706783  483106 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 21:37:57.706791  483106 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 21:37:57.706795  483106 command_runner.go:130] > # pinned_images = [
	I1202 21:37:57.706798  483106 command_runner.go:130] > # ]
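	A sketch of the three pattern kinds described above (the image names are illustrative, not from this run):

	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	        "quay.io/myorg/*",                # glob: wildcard only at the end
	        "*critical*",                     # keyword: wildcards on both ends
	    ]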
	I1202 21:37:57.706806  483106 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 21:37:57.706813  483106 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 21:37:57.706822  483106 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 21:37:57.706828  483106 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 21:37:57.706834  483106 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 21:37:57.707022  483106 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 21:37:57.707046  483106 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 21:37:57.707056  483106 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 21:37:57.707066  483106 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 21:37:57.707073  483106 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or the
	I1202 21:37:57.707084  483106 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1202 21:37:57.707105  483106 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
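	As an illustration of the lookup order just described (the namespace name is hypothetical): with signature_policy_dir = "/etc/crio/policies", an image pulled for pod namespace "prod" would be checked against /etc/crio/policies/prod.json, falling back to the signature_policy file above when that path does not exist.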
	I1202 21:37:57.707129  483106 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 21:37:57.707141  483106 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 21:37:57.707146  483106 command_runner.go:130] > # changing them here.
	I1202 21:37:57.707158  483106 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 21:37:57.707163  483106 command_runner.go:130] > # insecure_registries = [
	I1202 21:37:57.707278  483106 command_runner.go:130] > # ]
	I1202 21:37:57.707303  483106 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 21:37:57.707309  483106 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1202 21:37:57.707323  483106 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 21:37:57.707334  483106 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 21:37:57.707518  483106 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 21:37:57.707543  483106 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 21:37:57.707551  483106 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 21:37:57.707565  483106 command_runner.go:130] > # auto_reload_registries = false
	I1202 21:37:57.707577  483106 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 21:37:57.707586  483106 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1202 21:37:57.707593  483106 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 21:37:57.707601  483106 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 21:37:57.707626  483106 command_runner.go:130] > # The mode of short name resolution.
	I1202 21:37:57.707639  483106 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 21:37:57.707646  483106 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and its resolution is ambiguous.
	I1202 21:37:57.707652  483106 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 21:37:57.707737  483106 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 21:37:57.707776  483106 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support mounting OCI artifacts.
	I1202 21:37:57.707797  483106 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 21:37:57.707804  483106 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 21:37:57.707810  483106 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 21:37:57.707814  483106 command_runner.go:130] > # CNI plugins.
	I1202 21:37:57.707818  483106 command_runner.go:130] > [crio.network]
	I1202 21:37:57.707825  483106 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 21:37:57.707834  483106 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1202 21:37:57.707838  483106 command_runner.go:130] > # cni_default_network = ""
	I1202 21:37:57.707843  483106 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 21:37:57.707880  483106 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 21:37:57.707894  483106 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 21:37:57.707898  483106 command_runner.go:130] > # plugin_dirs = [
	I1202 21:37:57.708100  483106 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 21:37:57.708328  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708337  483106 command_runner.go:130] > # List of included pod metrics.
	I1202 21:37:57.708504  483106 command_runner.go:130] > # included_pod_metrics = [
	I1202 21:37:57.708692  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708716  483106 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 21:37:57.708721  483106 command_runner.go:130] > [crio.metrics]
	I1202 21:37:57.708725  483106 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 21:37:57.709042  483106 command_runner.go:130] > # enable_metrics = false
	I1202 21:37:57.709050  483106 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 21:37:57.709056  483106 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 21:37:57.709063  483106 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 21:37:57.709070  483106 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 21:37:57.709082  483106 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 21:37:57.709226  483106 command_runner.go:130] > # metrics_collectors = [
	I1202 21:37:57.709424  483106 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 21:37:57.709616  483106 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 21:37:57.709807  483106 command_runner.go:130] > # 	"containers_oom_total",
	I1202 21:37:57.709999  483106 command_runner.go:130] > # 	"processes_defunct",
	I1202 21:37:57.710186  483106 command_runner.go:130] > # 	"operations_total",
	I1202 21:37:57.710377  483106 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 21:37:57.710569  483106 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 21:37:57.710759  483106 command_runner.go:130] > # 	"operations_errors_total",
	I1202 21:37:57.710953  483106 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 21:37:57.711154  483106 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 21:37:57.711347  483106 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 21:37:57.711541  483106 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 21:37:57.711734  483106 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 21:37:57.711929  483106 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 21:37:57.712114  483106 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 21:37:57.712326  483106 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 21:37:57.712521  483106 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 21:37:57.712708  483106 command_runner.go:130] > # ]
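	A sketch of narrowing the enabled collectors, using the prefix equivalence described above (the selection is illustrative):

	    [crio.metrics]
	    enable_metrics = true
	    metrics_collectors = [
	        "operations_total",           # the same collector as "crio_operations_total"
	        "image_pulls_bytes_total",    # and as "container_runtime_crio_image_pulls_bytes_total"
	    ]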
	I1202 21:37:57.712718  483106 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 21:37:57.713101  483106 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 21:37:57.713111  483106 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 21:37:57.713462  483106 command_runner.go:130] > # metrics_port = 9090
	I1202 21:37:57.713472  483106 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 21:37:57.713766  483106 command_runner.go:130] > # metrics_socket = ""
	I1202 21:37:57.713798  483106 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 21:37:57.713843  483106 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 21:37:57.713867  483106 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 21:37:57.713890  483106 command_runner.go:130] > # certificate on any modification event.
	I1202 21:37:57.714026  483106 command_runner.go:130] > # metrics_cert = ""
	I1202 21:37:57.714049  483106 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 21:37:57.714055  483106 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 21:37:57.714333  483106 command_runner.go:130] > # metrics_key = ""
	I1202 21:37:57.714367  483106 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 21:37:57.714411  483106 command_runner.go:130] > [crio.tracing]
	I1202 21:37:57.714434  483106 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 21:37:57.714690  483106 command_runner.go:130] > # enable_tracing = false
	I1202 21:37:57.714730  483106 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 21:37:57.715040  483106 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 21:37:57.715074  483106 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 21:37:57.715400  483106 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 21:37:57.715424  483106 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 21:37:57.715465  483106 command_runner.go:130] > [crio.nri]
	I1202 21:37:57.715486  483106 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 21:37:57.715706  483106 command_runner.go:130] > # enable_nri = true
	I1202 21:37:57.715731  483106 command_runner.go:130] > # NRI socket to listen on.
	I1202 21:37:57.716042  483106 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 21:37:57.716072  483106 command_runner.go:130] > # NRI plugin directory to use.
	I1202 21:37:57.716381  483106 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 21:37:57.716412  483106 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 21:37:57.716702  483106 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 21:37:57.716734  483106 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 21:37:57.716910  483106 command_runner.go:130] > # nri_disable_connections = false
	I1202 21:37:57.716983  483106 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 21:37:57.717007  483106 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 21:37:57.717025  483106 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 21:37:57.717040  483106 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 21:37:57.717084  483106 command_runner.go:130] > # NRI default validator configuration.
	I1202 21:37:57.717109  483106 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 21:37:57.717127  483106 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 21:37:57.717180  483106 command_runner.go:130] > # can be restricted/rejected:
	I1202 21:37:57.717207  483106 command_runner.go:130] > # - OCI hook injection
	I1202 21:37:57.717238  483106 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 21:37:57.717387  483106 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 21:37:57.717408  483106 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 21:37:57.717448  483106 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 21:37:57.717469  483106 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 21:37:57.717489  483106 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 21:37:57.717520  483106 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 21:37:57.717542  483106 command_runner.go:130] > #
	I1202 21:37:57.717559  483106 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 21:37:57.717588  483106 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 21:37:57.717614  483106 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 21:37:57.717634  483106 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 21:37:57.717673  483106 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 21:37:57.717700  483106 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 21:37:57.717721  483106 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 21:37:57.717750  483106 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 21:37:57.717775  483106 command_runner.go:130] > # ]
	I1202 21:37:57.717791  483106 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 21:37:57.717809  483106 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 21:37:57.717844  483106 command_runner.go:130] > [crio.stats]
	I1202 21:37:57.717862  483106 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 21:37:57.717880  483106 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 21:37:57.717896  483106 command_runner.go:130] > # stats_collection_period = 0
	I1202 21:37:57.717933  483106 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 21:37:57.717955  483106 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 21:37:57.717969  483106 command_runner.go:130] > # collection_period = 0
	I1202 21:37:57.719581  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.679996811Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 21:37:57.719602  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680035195Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 21:37:57.719612  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680068245Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 21:37:57.719634  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680094978Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 21:37:57.719650  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680175192Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.719661  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680551245Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 21:37:57.719673  483106 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 21:37:57.719793  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:57.719806  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:57.719822  483106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:37:57.719854  483106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:37:57.719977  483106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:37:57.720050  483106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:37:57.727128  483106 command_runner.go:130] > kubeadm
	I1202 21:37:57.727200  483106 command_runner.go:130] > kubectl
	I1202 21:37:57.727217  483106 command_runner.go:130] > kubelet
	I1202 21:37:57.727679  483106 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:37:57.727758  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:37:57.735128  483106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:37:57.747401  483106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:37:57.759635  483106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:37:57.772168  483106 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:37:57.775704  483106 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:37:57.775781  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.892482  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:58.414394  483106 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:37:58.414415  483106 certs.go:195] generating shared ca certs ...
	I1202 21:37:58.414431  483106 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:58.414617  483106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:37:58.414690  483106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:37:58.414702  483106 certs.go:257] generating profile certs ...
	I1202 21:37:58.414822  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:37:58.414884  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:37:58.414927  483106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:37:58.414939  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:37:58.414953  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:37:58.414964  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:37:58.414980  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:37:58.414991  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:37:58.415019  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:37:58.415030  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:37:58.415042  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:37:58.415094  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:37:58.415127  483106 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:37:58.415140  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:37:58.415171  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:37:58.415199  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:37:58.415223  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:37:58.415279  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:58.415327  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.415344  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem -> /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.415358  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.415948  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:37:58.434575  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:37:58.454217  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:37:58.476636  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:37:58.499852  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:37:58.517799  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:37:58.537626  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:37:58.556051  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:37:58.573621  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:37:58.591561  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:37:58.609240  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:37:58.626214  483106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:37:58.638898  483106 ssh_runner.go:195] Run: openssl version
	I1202 21:37:58.644941  483106 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:37:58.645379  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:37:58.653758  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657242  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657279  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657350  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.697450  483106 command_runner.go:130] > b5213941
	I1202 21:37:58.697880  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:37:58.705830  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:37:58.714550  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718238  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718320  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718390  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.760939  483106 command_runner.go:130] > 51391683
	I1202 21:37:58.761409  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:37:58.769112  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:37:58.777300  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780878  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780914  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780988  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.821311  483106 command_runner.go:130] > 3ec20f2e
	I1202 21:37:58.821773  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:37:58.829482  483106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833099  483106 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833249  483106 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:37:58.833277  483106 command_runner.go:130] > Device: 259,1	Inode: 1309045     Links: 1
	I1202 21:37:58.833296  483106 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:58.833318  483106 command_runner.go:130] > Access: 2025-12-02 21:33:51.106313964 +0000
	I1202 21:37:58.833335  483106 command_runner.go:130] > Modify: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833354  483106 command_runner.go:130] > Change: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833368  483106 command_runner.go:130] >  Birth: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833452  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:37:58.873701  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.874162  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:37:58.914810  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.915281  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:37:58.957479  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.957884  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:37:58.998366  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.998755  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:37:59.041919  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.042032  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:37:59.082406  483106 command_runner.go:130] > Certificate will not expire
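Each of the expiry checks above relies on openssl's -checkend flag: the command prints "Certificate will not expire" and exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise, which is what gates certificate regeneration here. A minimal sketch against one of the paths from the log:

    # Exit status 0 means the cert stays valid for at least the next 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate ok for 24h"
    else
        echo "certificate expires within 24h, regenerate"
    fi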
	I1202 21:37:59.082849  483106 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:59.082947  483106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:37:59.083063  483106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:37:59.109816  483106 cri.go:89] found id: ""
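The empty result (found id: "") means the label filter matched nothing: no kube-system containers exist in the cri-o runtime yet, so the control plane is not running and the cluster-restart path below is taken. The same check can be reproduced by hand with crictl (command taken verbatim from the log):

    # Any output means kube-system containers exist in the runtime;
    # an empty result means the control plane has not started
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -z "$ids" ] && echo "no kube-system containers found"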
	I1202 21:37:59.109903  483106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:37:59.116871  483106 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:37:59.116937  483106 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:37:59.116958  483106 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:37:59.117791  483106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:37:59.117835  483106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:37:59.117913  483106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:37:59.125060  483106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:37:59.125506  483106 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.125617  483106 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-066896" cluster setting kubeconfig missing "functional-066896" context setting]
	I1202 21:37:59.125900  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.126337  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.126509  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.127095  483106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:37:59.127116  483106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:37:59.127122  483106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:37:59.127127  483106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:37:59.127133  483106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:37:59.127170  483106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
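The kubeconfig repair above rewrites the file directly after detecting the missing cluster and context entries. A sketch of the equivalent manual repair with kubectl config, using the endpoint and credential paths shown in the client config (all values copied from the log, none invented):

    # Recreate the missing cluster, user and context entries for the profile
    kubectl config set-cluster functional-066896 \
        --server=https://192.168.49.2:8441 \
        --certificate-authority=/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt
    kubectl config set-credentials functional-066896 \
        --client-certificate=/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt \
        --client-key=/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
    kubectl config set-context functional-066896 --cluster=functional-066896 --user=functional-066896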
	I1202 21:37:59.127484  483106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:37:59.134957  483106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:37:59.134991  483106 kubeadm.go:602] duration metric: took 17.137902ms to restartPrimaryControlPlane
	I1202 21:37:59.135014  483106 kubeadm.go:403] duration metric: took 52.172876ms to StartCluster
	I1202 21:37:59.135029  483106 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135086  483106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.135727  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135915  483106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:37:59.136175  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:59.136232  483106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:37:59.136325  483106 addons.go:70] Setting storage-provisioner=true in profile "functional-066896"
	I1202 21:37:59.136339  483106 addons.go:239] Setting addon storage-provisioner=true in "functional-066896"
	I1202 21:37:59.136375  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.136437  483106 addons.go:70] Setting default-storageclass=true in profile "functional-066896"
	I1202 21:37:59.136458  483106 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-066896"
	I1202 21:37:59.136761  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.136798  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.139277  483106 out.go:179] * Verifying Kubernetes components...
	I1202 21:37:59.140771  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:59.165976  483106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:37:59.168845  483106 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.168870  483106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
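The "scp memory -->" entries denote streaming an in-memory asset over the profile's SSH connection rather than copying a local file. A rough plain-ssh equivalent, assuming a local storage-provisioner.yaml and the tunnel address and port from the sshutil lines below (the command shape is an approximation, not minikube's actual runner):

    # Stream the manifest over ssh to the target path inside the node
    ssh -p 33148 docker@127.0.0.1 \
        'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null' \
        < storage-provisioner.yaml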
	I1202 21:37:59.168937  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.175656  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.176018  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.176385  483106 addons.go:239] Setting addon default-storageclass=true in "functional-066896"
	I1202 21:37:59.176428  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.176909  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.211203  483106 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:37:59.211229  483106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:37:59.211311  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.225207  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.248989  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.349954  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:59.407494  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.408663  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.165713  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165766  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165797  483106 retry.go:31] will retry after 202.822033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165873  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165889  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165899  483106 retry.go:31] will retry after 281.773783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
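Both failures are client-side: before applying, kubectl fetches the OpenAPI schema from the API server to validate the manifest, and with the apiserver still down that fetch is refused on localhost:8441. As the error text itself notes, validation can be skipped; a sketch of the two ways out (paths from the log; minikube's actual loop simply retries the apply with backoff, as seen below):

    # Option 1: skip client-side schema validation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false \
        -f /etc/kubernetes/addons/storage-provisioner.yaml
    # Option 2: wait until the apiserver answers its readiness endpoint, then apply
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl get --raw /readyz >/dev/null 2>&1; do
        sleep 1
    done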
	I1202 21:38:00.166009  483106 node_ready.go:35] waiting up to 6m0s for node "functional-066896" to be "Ready" ...
	I1202 21:38:00.166135  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.166200  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
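The node_ready poll issues a GET against /api/v1/nodes/functional-066896 roughly every 500ms and inspects the node's Ready condition; the empty response status above is the connection refusal surfaced explicitly a few lines later. An equivalent one-shot check with kubectl (the jsonpath expression is illustrative):

    # Prints "True" once the node reports a satisfied Ready condition
    kubectl get node functional-066896 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'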
	I1202 21:38:00.368900  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.441989  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.442041  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.442063  483106 retry.go:31] will retry after 393.334545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.448331  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.512520  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.512571  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.512592  483106 retry.go:31] will retry after 493.57139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.666814  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.667270  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.835693  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.896509  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.896567  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.896588  483106 retry.go:31] will retry after 517.359335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.006926  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.069882  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.069952  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.069980  483106 retry.go:31] will retry after 823.867865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.167068  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.167622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.415018  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:01.473591  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.473646  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.473665  483106 retry.go:31] will retry after 817.290744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.666990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.667103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.894929  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.964144  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.967581  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.967615  483106 retry.go:31] will retry after 586.961084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.167465  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:02.167512  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:02.292000  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:02.348780  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.352211  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.352246  483106 retry.go:31] will retry after 1.098539896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.555610  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:02.616881  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.616985  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.617011  483106 retry.go:31] will retry after 1.090026315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
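The retry.go delays above grow roughly geometrically with jitter (202ms, 281ms, 393ms, 493ms, and onward into seconds), a standard capped exponential backoff. A minimal shell rendering of the same pattern, with apply_addon as a hypothetical stand-in for the kubectl apply being retried:

    delay_ms=200
    for attempt in 1 2 3 4 5 6; do
        apply_addon && break                 # hypothetical stand-in for the apply above
        sleep "$(awk "BEGIN{print $delay_ms/1000}")"
        delay_ms=$(( delay_ms * 3 / 2 ))     # ~1.5x growth, matching the logged delays
    done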
	I1202 21:38:02.667191  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.667272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.667575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.451026  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:03.515404  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.515439  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.515458  483106 retry.go:31] will retry after 2.58724354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.666944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.667328  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.707632  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:03.776872  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.776924  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.776953  483106 retry.go:31] will retry after 972.290717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.166626  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.166706  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:04.666777  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.666867  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.667243  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:04.667303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:04.749460  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:04.810694  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:04.810734  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.810752  483106 retry.go:31] will retry after 3.951899284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:05.166161  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.166235  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.166558  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:05.666140  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.666212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.102988  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:06.161220  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:06.161263  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.161284  483106 retry.go:31] will retry after 3.838527337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.166366  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.166444  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.666314  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.666386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:07.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.166299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:07.166671  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:07.666338  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.666425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.666777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.166503  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.166606  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.166933  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.666295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.666603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.763053  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:08.821648  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:08.821701  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:08.821721  483106 retry.go:31] will retry after 4.430309202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:09.166538  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.166615  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.166964  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:09.167037  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:09.666806  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.666904  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.667263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.001423  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:10.065960  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:10.069561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.069595  483106 retry.go:31] will retry after 4.835447081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.166750  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.166827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.167127  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.666978  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.667076  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.667385  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:11.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.167266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.167557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:11.167608  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:11.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.666586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.166242  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.167025  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.167092  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.167359  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.252779  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:13.311539  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:13.314561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.314593  483106 retry.go:31] will retry after 7.77807994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
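The retry.go:31 lines that follow each failed apply show a jittered backoff: every failure reschedules the same kubectl apply with a longer, randomized delay (7.8s, 9.1s, 11.9s, and so on below). A self-contained sketch of that pattern in plain Go follows; the doubling factor and 50% jitter are assumptions for illustration, not minikube's exact tuning.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to maxAttempts times, sleeping between
// attempts with an exponentially growing, jittered delay, mirroring the
// "will retry after ..." behaviour in this log.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Jitter the delay by up to 50% so concurrent retries don't align.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retryWithBackoff(4, 2*time.Second, func() error {
		return errors.New("connect: connection refused")
	})
	fmt.Println(err)
}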
	I1202 21:38:13.667097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.667178  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.667555  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:13.667614  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:14.166435  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.166532  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.166857  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.666157  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.666230  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.666502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.906038  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:14.963486  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:14.966545  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:14.966583  483106 retry.go:31] will retry after 9.105443561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:15.166926  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.167018  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.167368  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:15.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.666221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.666564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:16.166892  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.166962  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.167321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:16.167385  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:16.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.667311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.667666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.166271  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.166345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.166811  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.666246  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.666576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.166665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:18.666398  483106 type.go:168] "Request Body" body=""
	I1202 21:38:18.666474  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:18.666809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:18.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
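Unlike the addon retries, the node polls above tick at a fixed cadence of roughly 500 ms (timestamps land at .166 and .666 of each second). A sketch of such a fixed-interval poll using apimachinery's wait helper follows; the 5-second timeout is shortened here for illustration, and the probe body is a stand-in for the Ready check sketched earlier.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 500ms until the probe succeeds or the timeout elapses,
	// matching the request cadence visible in the log above.
	err := wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 5*time.Second, true,
		func(ctx context.Context) (bool, error) {
			// Stand-in probe: returning (false, nil) keeps the loop polling,
			// just as the refused connections do in this run.
			return false, nil
		})
	fmt.Println(err)
}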
	I1202 21:38:19.167020  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.167103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.167423  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:19.666169  483106 type.go:168] "Request Body" body=""
	I1202 21:38:19.666247  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:19.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.166216  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.166296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.166641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:20.666328  483106 type.go:168] "Request Body" body=""
	I1202 21:38:20.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:20.666687  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:21.093408  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:21.149979  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:21.153644  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.153677  483106 retry.go:31] will retry after 11.903983297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.166790  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.167199  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:21.167253  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:21.666923  483106 type.go:168] "Request Body" body=""
	I1202 21:38:21.667013  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:21.667352  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.166588  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.166661  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.166957  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:22.666842  483106 type.go:168] "Request Body" body=""
	I1202 21:38:22.666921  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:22.667250  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:23.167035  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.167114  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.167459  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:23.167514  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:23.666741  483106 type.go:168] "Request Body" body=""
	I1202 21:38:23.666815  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:23.667100  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.072876  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:24.134664  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:24.134721  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.134742  483106 retry.go:31] will retry after 11.08333461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.166922  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.166990  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.167311  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:24.666947  483106 type.go:168] "Request Body" body=""
	I1202 21:38:24.667038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:24.667366  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.167335  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:25.667220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:25.667299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:25.667607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:25.667651  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:26.166305  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.166387  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.166780  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:26.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:38:26.666584  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:26.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.166286  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.166358  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:27.666223  483106 type.go:168] "Request Body" body=""
	I1202 21:38:27.666297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:27.666627  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:28.166866  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.166938  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:28.167314  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:28.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:38:28.667185  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:28.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.166534  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.166605  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.166912  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:29.666220  483106 type.go:168] "Request Body" body=""
	I1202 21:38:29.666294  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:29.666610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.166321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.166409  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:30.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:38:30.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:30.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:30.666751  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:31.166158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.166232  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.166500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:31.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:38:31.666300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:31.666629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.166362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:32.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:38:32.666462  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:32.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:32.666785  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:33.058732  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:33.133401  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:33.133437  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.133456  483106 retry.go:31] will retry after 7.836153133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.166617  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.166698  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.167044  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:33.666857  483106 type.go:168] "Request Body" body=""
	I1202 21:38:33.666928  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:33.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.166841  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.166919  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.167201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:34.666992  483106 type.go:168] "Request Body" body=""
	I1202 21:38:34.667107  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:34.667433  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:34.667486  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:35.166145  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.166224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.166561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:35.218798  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:35.277107  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:35.277160  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.277179  483106 retry.go:31] will retry after 18.212486347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
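Each apply above fails before anything reaches the cluster: kubectl's client-side validation first tries to download the OpenAPI schema from the same unreachable apiserver. The --validate=false suggestion in the error text only skips that schema fetch; while port 8441 refuses connections, the apply itself would still fail. A hypothetical wrapper showing how a minikube-style invocation could pass that flag is sketched below; the binary and manifest paths are copied from this log, but the helper itself is illustrative.

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyManifest shells out to the bundled kubectl the way the addon
// deployer's commands in this log do, optionally skipping client-side
// validation so the command does not need to fetch /openapi/v2 first.
func applyManifest(ctx context.Context, manifest string, skipValidation bool) error {
	args := []string{"apply", "--force", "-f", manifest}
	if skipValidation {
		args = append(args, "--validate=false")
	}
	cmd := exec.CommandContext(ctx,
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 32*time.Second)
	defer cancel()
	if err := applyManifest(ctx, "/etc/kubernetes/addons/storage-provisioner.yaml", true); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}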
	I1202 21:38:35.666236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:35.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:35.666575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.166236  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.166653  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:36.666345  483106 type.go:168] "Request Body" body=""
	I1202 21:38:36.666418  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:36.666776  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:37.166874  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.166942  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.167236  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:37.167279  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:37.667058  483106 type.go:168] "Request Body" body=""
	I1202 21:38:37.667144  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:37.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.167192  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.167270  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.167629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:38.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:38:38.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:38.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.166835  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.166911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.167230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:39.667062  483106 type.go:168] "Request Body" body=""
	I1202 21:38:39.667137  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:39.667449  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:39.667503  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:40.166787  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.166859  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.666946  483106 type.go:168] "Request Body" body=""
	I1202 21:38:40.667046  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:40.667374  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:40.969813  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:41.027522  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:41.030695  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.030727  483106 retry.go:31] will retry after 26.445141412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.167017  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.167086  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.167412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:41.667158  483106 type.go:168] "Request Body" body=""
	I1202 21:38:41.667226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:41.667538  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:41.667593  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:42.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.166302  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.166668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:42.666412  483106 type.go:168] "Request Body" body=""
	I1202 21:38:42.666487  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:42.666864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.167082  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.167382  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:43.667222  483106 type.go:168] "Request Body" body=""
	I1202 21:38:43.667290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:43.667605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:43.667663  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:44.166619  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.166695  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.167048  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:44.666563  483106 type.go:168] "Request Body" body=""
	I1202 21:38:44.666635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:44.666906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.166291  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:45.666557  483106 type.go:168] "Request Body" body=""
	I1202 21:38:45.666637  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:45.666980  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:46.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.166248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.166526  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:46.166568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:46.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:38:46.666372  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:46.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.166454  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.166529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.166849  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:47.667114  483106 type.go:168] "Request Body" body=""
	I1202 21:38:47.667196  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:47.667500  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:48.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.166278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.166598  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:48.166644  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:48.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:38:48.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:48.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.166918  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.166985  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.167265  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:49.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:38:49.667124  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:49.667462  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:50.167148  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.167544  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:50.167600  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:50.666859  483106 type.go:168] "Request Body" body=""
	I1202 21:38:50.666941  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:50.667348  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.166149  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:51.666321  483106 type.go:168] "Request Body" body=""
	I1202 21:38:51.666400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:51.666742  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.167091  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.167502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:52.666212  483106 type.go:168] "Request Body" body=""
	I1202 21:38:52.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:52.666630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:52.666682  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:53.166365  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.166440  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.166743  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:53.490393  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:53.549126  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:53.552379  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.552413  483106 retry.go:31] will retry after 28.270272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.666480  483106 type.go:168] "Request Body" body=""
	I1202 21:38:53.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:53.666897  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.166899  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.166977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.167310  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:54.667106  483106 type.go:168] "Request Body" body=""
	I1202 21:38:54.667183  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:54.667452  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:54.667501  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:55.166711  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.166784  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.167096  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:55.666915  483106 type.go:168] "Request Body" body=""
	I1202 21:38:55.666986  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:55.667321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.167141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.167212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.167527  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:56.666215  483106 type.go:168] "Request Body" body=""
	I1202 21:38:56.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:56.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:57.166258  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:57.166735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:57.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:57.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:57.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.166964  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.167097  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.167360  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:58.667123  483106 type.go:168] "Request Body" body=""
	I1202 21:38:58.667203  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:58.667560  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:59.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.166590  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.166930  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:59.166985  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:59.666233  483106 type.go:168] "Request Body" body=""
	I1202 21:38:59.666305  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:59.666578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.166345  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.166424  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.166735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:00.666605  483106 type.go:168] "Request Body" body=""
	I1202 21:39:00.666696  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:00.667071  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:01.166833  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.166920  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.167258  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:01.167303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:01.667118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:01.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:01.667514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.166229  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.166308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:02.666901  483106 type.go:168] "Request Body" body=""
	I1202 21:39:02.666977  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:02.667267  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:03.167047  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.167126  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.167463  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:03.167519  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:03.667138  483106 type.go:168] "Request Body" body=""
	I1202 21:39:03.667208  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:03.667536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.166363  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.166711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:04.666264  483106 type.go:168] "Request Body" body=""
	I1202 21:39:04.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:04.666699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.166401  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.166480  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.166807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:05.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:39:05.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:05.666607  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:05.666654  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:06.166221  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.166300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:06.666253  483106 type.go:168] "Request Body" body=""
	I1202 21:39:06.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:06.666658  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.166933  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.167016  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.167275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:07.476950  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:07.537734  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:07.540988  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.541021  483106 retry.go:31] will retry after 43.142584555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
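	Each failed apply is rescheduled with a growing delay (28.3 s for storage-provisioner earlier, 43.1 s for storageclass here). A minimal sketch of that retry-with-backoff pattern in shell, with illustrative delays; minikube's retry.go uses jittered backoff rather than plain doubling:

		delay=15
		until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
		    -f /etc/kubernetes/addons/storageclass.yaml; do
		  echo "apply failed, retrying in ${delay}s" >&2
		  sleep "$delay"
		  delay=$((delay * 2))   # grow the wait between attempts
		done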
	I1202 21:39:07.666246  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:07.666721  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:08.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.166806  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:08.666497  483106 type.go:168] "Request Body" body=""
	I1202 21:39:08.666561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:08.666831  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.166990  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.167081  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.167424  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:09.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:39:09.666233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:09.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:10.166170  483106 type.go:168] "Request Body" body=""
	I1202 21:39:10.166240  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:10.166510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:10.166560  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:10.666290  483106 type.go:168] "Request Body" body=""
	I1202 21:39:10.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:10.666679  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:11.166219  483106 type.go:168] "Request Body" body=""
	I1202 21:39:11.166300  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:11.166624  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:11.666147  483106 type.go:168] "Request Body" body=""
	I1202 21:39:11.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:11.666484  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:12.166223  483106 type.go:168] "Request Body" body=""
	I1202 21:39:12.166293  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:12.166617  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:12.166680  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:12.666258  483106 type.go:168] "Request Body" body=""
	I1202 21:39:12.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:12.666641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:13.167106  483106 type.go:168] "Request Body" body=""
	I1202 21:39:13.167177  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:13.167479  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:13.666184  483106 type.go:168] "Request Body" body=""
	I1202 21:39:13.666262  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:13.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:14.166400  483106 type.go:168] "Request Body" body=""
	I1202 21:39:14.166473  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:14.166820  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:14.166879  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:14.666975  483106 type.go:168] "Request Body" body=""
	I1202 21:39:14.667061  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:14.667380  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:15.167173  483106 type.go:168] "Request Body" body=""
	I1202 21:39:15.167254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:15.167549  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:15.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:39:15.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:15.666659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:16.166211  483106 type.go:168] "Request Body" body=""
	I1202 21:39:16.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:16.166592  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:16.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:39:16.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:16.666667  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:17.166399  483106 type.go:168] "Request Body" body=""
	I1202 21:39:17.166478  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:17.166790  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:17.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:39:17.666296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:17.666629  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:18.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:39:18.166356  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:18.166694  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:18.666433  483106 type.go:168] "Request Body" body=""
	I1202 21:39:18.666506  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:18.666858  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:18.666917  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:19.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:39:19.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:19.167267  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:19.667092  483106 type.go:168] "Request Body" body=""
	I1202 21:39:19.667166  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:19.667486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:20.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:20.166275  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:20.166627  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:20.666762  483106 type.go:168] "Request Body" body=""
	I1202 21:39:20.666831  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:20.667148  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:20.667207  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:21.166923  483106 type.go:168] "Request Body" body=""
	I1202 21:39:21.167030  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:21.167353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:21.667178  483106 type.go:168] "Request Body" body=""
	I1202 21:39:21.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:21.667576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:21.822959  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:39:21.878670  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878722  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878822  483106 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
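	At this point the storage-provisioner enable gives up through the addon callback path while the node poll keeps hitting connection refused on 192.168.49.2:8441. A hypothetical first diagnostic for that symptom is to check whether anything is listening on the apiserver port and then probe its health endpoint (profile name assumed to match the node name; -k skips verification of the self-signed certificate, and /healthz is readable without credentials by default):

		minikube -p functional-066896 ssh -- sudo ss -ltnp | grep 8441
		curl -k https://192.168.49.2:8441/healthz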
	I1202 21:39:22.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:22.167188  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:22.167486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:22.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:39:22.666296  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:22.666649  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:23.166314  483106 type.go:168] "Request Body" body=""
	I1202 21:39:23.166385  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:23.166692  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:23.166739  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:23.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:39:23.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:23.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:24.166668  483106 type.go:168] "Request Body" body=""
	I1202 21:39:24.166744  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:24.167080  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:24.666918  483106 type.go:168] "Request Body" body=""
	I1202 21:39:24.666992  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:24.667347  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:25.166732  483106 type.go:168] "Request Body" body=""
	I1202 21:39:25.166798  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:25.167094  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:25.167141  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:25.666900  483106 type.go:168] "Request Body" body=""
	I1202 21:39:25.666992  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:25.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:26.167051  483106 type.go:168] "Request Body" body=""
	I1202 21:39:26.167153  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:26.167485  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:26.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:39:26.666270  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:26.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:27.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:27.166286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:27.166562  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:27.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:39:27.666353  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:27.666715  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:27.666775  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:28.166199  483106 type.go:168] "Request Body" body=""
	I1202 21:39:28.166268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:28.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:28.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:39:28.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:28.666620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:29.166566  483106 type.go:168] "Request Body" body=""
	I1202 21:39:29.166638  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:29.166966  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:29.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:39:29.666266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:29.666571  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:30.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:39:30.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:30.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:30.166748  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:30.666468  483106 type.go:168] "Request Body" body=""
	I1202 21:39:30.666548  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:30.666896  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:31.166188  483106 type.go:168] "Request Body" body=""
	I1202 21:39:31.166269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:31.166537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:31.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:39:31.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:31.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:32.166401  483106 type.go:168] "Request Body" body=""
	I1202 21:39:32.166483  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:32.166797  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:32.166854  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:32.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:39:32.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:32.666570  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:33.166273  483106 type.go:168] "Request Body" body=""
	I1202 21:39:33.166360  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:33.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:33.666426  483106 type.go:168] "Request Body" body=""
	I1202 21:39:33.666501  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:33.666838  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:34.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:39:34.166641  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:34.166906  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:34.166954  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:34.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:39:34.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:34.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:35.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:39:35.166396  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:35.166764  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:35.667064  483106 type.go:168] "Request Body" body=""
	I1202 21:39:35.667133  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:35.667396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:36.167160  483106 type.go:168] "Request Body" body=""
	I1202 21:39:36.167234  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:36.167571  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:36.167629  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:36.666296  483106 type.go:168] "Request Body" body=""
	I1202 21:39:36.666373  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:36.666715  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:37.167008  483106 type.go:168] "Request Body" body=""
	I1202 21:39:37.167074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:37.167365  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:37.667188  483106 type.go:168] "Request Body" body=""
	I1202 21:39:37.667263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:37.667557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:38.166244  483106 type.go:168] "Request Body" body=""
	I1202 21:39:38.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:38.166608  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:38.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:39:38.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:38.666617  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:38.666658  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:39.166799  483106 type.go:168] "Request Body" body=""
	I1202 21:39:39.166866  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:39.167214  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:39.666873  483106 type.go:168] "Request Body" body=""
	I1202 21:39:39.666945  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:39.667279  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:40.166544  483106 type.go:168] "Request Body" body=""
	I1202 21:39:40.166616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:40.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:40.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:39:40.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:40.666645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:40.666705  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:41.166392  483106 type.go:168] "Request Body" body=""
	I1202 21:39:41.166467  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:41.166820  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:41.667109  483106 type.go:168] "Request Body" body=""
	I1202 21:39:41.667193  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:41.667456  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:42.166205  483106 type.go:168] "Request Body" body=""
	I1202 21:39:42.166286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:42.166704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:42.666430  483106 type.go:168] "Request Body" body=""
	I1202 21:39:42.666507  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:42.666850  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:42.666912  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:39:43.166126  483106 type.go:168] "Request Body" body=""
	I1202 21:39:43.166198  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:43.166502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:43.666218  483106 type.go:168] "Request Body" body=""
	I1202 21:39:43.666290  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:43.666604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:44.166582  483106 type.go:168] "Request Body" body=""
	I1202 21:39:44.166676  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:44.167019  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:39:44.666769  483106 type.go:168] "Request Body" body=""
	I1202 21:39:44.666837  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:44.667123  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:44.667165  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 poll repeated every ~500ms from 21:39:45.167 through 21:39:50.667, each attempt returning status="" milliseconds=0; node_ready.go:55 logged "connect: connection refused" will-retry warnings at 21:39:46.667 and 21:39:49.167 ...]
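	The warnings trace minikube's node-readiness wait loop: one GET roughly every 500ms, with a dial failure treated as retryable rather than fatal. A minimal sketch of that behavior, assuming a plain HTTPS client instead of the client-go machinery minikube actually uses (the URL, interval, and JSON shape follow the log; the client construction and TLS handling are illustrative assumptions):

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus holds just the condition list needed to check Ready.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls until the node reports Ready or the deadline
// passes, treating dial errors as retryable, as the warnings above do.
func waitNodeReady(url string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		// The test cluster's certificate is self-signed; skipping
		// verification keeps the sketch short (never do this in production).
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("will retry: %v\n", err) // matches node_ready.go:55
			continue
		}
		var n nodeStatus
		decodeErr := json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		if decodeErr != nil {
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				return nil
			}
		}
	}
	return fmt.Errorf("node never became Ready within %s", timeout)
}
```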
	I1202 21:39:50.684445  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:50.752913  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.752959  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.753053  483106 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:50.754872  483106 out.go:179] * Enabled addons: 
	I1202 21:39:50.756298  483106 addons.go:530] duration metric: took 1m51.620061888s for enable addons: enabled=[]
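	The addon enabler shells out to kubectl exactly as logged above and, per the "apply failed, will retry" warning, retries before giving up and surfacing the error through out.go. A sketch of that pattern, reusing the command line from the log; the retry count and backoff are illustrative assumptions, and note that the stderr hint about --validate=false would only skip the OpenAPI schema download, not fix the refused connection:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest runs the same command the log shows, retrying on
// failure as addons.go's "apply failed, will retry" message implies.
// The attempt count and sleep interval are illustrative assumptions.
func applyManifest(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
			kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// While the apiserver is down, kubectl cannot download the
		// OpenAPI schema for validation and exits with status 1.
		lastErr = fmt.Errorf("apply failed: %v\noutput:\n%s", err, out)
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		3)
	if err != nil {
		fmt.Println(err)
	}
}
```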
	I1202 21:39:51.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:39:51.166426  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:51.166756  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... this poll repeated every ~500ms through 21:40:44.666 without a single successful response (status="" milliseconds=0 on every attempt); node_ready.go:55 logged a "connect: connection refused" will-retry warning roughly every 2-2.5s, the last at 21:40:44.666 ...]
	I1202 21:40:45.166301  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.166394  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.166815  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:45.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.666688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.166383  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.166726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.666288  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.666390  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.666823  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:46.666883  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:47.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:47.666906  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.666980  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.667259  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.167086  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.167539  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:49.166560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.166634  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:49.166951  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:49.666759  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.666827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.667195  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.167180  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.167561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.166662  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.666376  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.666454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.666782  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:51.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:52.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.166277  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:52.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.666260  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.166242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.166586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:54.166666  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.166740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.167107  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:54.167169  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:54.666965  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.667066  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.667453  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.166768  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.166843  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.167212  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.667075  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.667147  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.166196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.666907  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.666978  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.667341  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:56.667400  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:57.167105  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.167182  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.167548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:57.666151  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.666224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.666574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:59.166616  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.166687  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.167061  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:59.167133  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:59.666436  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.666763  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.166433  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.166775  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.666772  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.666864  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.667256  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.166511  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.166588  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.166874  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.666242  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.666312  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.666652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:01.666713  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:02.166240  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:02.666821  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.667219  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.167019  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.167098  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.167404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.667108  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.667179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.667509  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:03.667571  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:04.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.166539  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:04.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.666387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.666456  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:06.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:06.166736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:06.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.666668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.166352  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.166429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.166638  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:08.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:09.166897  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.167350  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:09.667159  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.667231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.667559  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.166198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.166610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:11.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.166812  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:11.166864  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:11.667095  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.667159  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.667414  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.167205  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.167279  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.167635  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.666270  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.666734  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.166244  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.166554  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.666237  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:13.666743  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:14.166756  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.166839  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.167224  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:14.666384  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.666452  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.666765  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.166506  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.166604  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.666880  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.666953  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.667301  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:15.667360  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:16.167103  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.167186  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.167467  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:16.666185  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.666259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.666581  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.166400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.166698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.666368  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.666435  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.666759  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:18.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.166336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:18.166712  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:18.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.666316  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.166992  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.666925  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.667275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:20.167102  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.167179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.167552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:20.167610  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.166282  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.166361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.166713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.666428  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.666878  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.166118  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.166189  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.166472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.666186  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.666263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.666583  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:22.666636  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:23.166387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:23.666524  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.666616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.666974  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.166861  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.166944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.167295  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.667130  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.667205  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.667569  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:24.667625  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:25.166285  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.166367  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.166640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:25.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.166431  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.166504  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.166839  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.666268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:27.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:27.166741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:27.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.166370  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.166448  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.666614  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:29.166581  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.166657  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.166988  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:29.167064  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:29.666310  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.666379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.166344  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.666407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.666494  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.666837  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.166203  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.166591  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.666262  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.666700  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:31.666773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:32.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.166666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:32.666931  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.667021  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.167169  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.666354  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:34.166448  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.166521  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.166778  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:34.166817  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same 500ms poll repeats from 21:41:34 through 21:42:36: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-066896 returns no response ("dial tcp 192.168.49.2:8441: connect: connection refused"), and node_ready.go:55 logs the identical "will retry" warning roughly every 2.5 seconds; only the final warning of the run is reproduced below ...]
	W1202 21:42:36.166744  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:36.666417  483106 type.go:168] "Request Body" body=""
	I1202 21:42:36.666492  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:36.666845  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:37.166502  483106 type.go:168] "Request Body" body=""
	I1202 21:42:37.166593  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:37.166951  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:37.666782  483106 type.go:168] "Request Body" body=""
	I1202 21:42:37.666857  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:37.667204  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:38.167040  483106 type.go:168] "Request Body" body=""
	I1202 21:42:38.167135  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:38.167508  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:38.167570  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:38.666773  483106 type.go:168] "Request Body" body=""
	I1202 21:42:38.666845  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:38.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:39.167094  483106 type.go:168] "Request Body" body=""
	I1202 21:42:39.167166  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:39.167513  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:39.667211  483106 type.go:168] "Request Body" body=""
	I1202 21:42:39.667304  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:39.667685  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:40.166206  483106 type.go:168] "Request Body" body=""
	I1202 21:42:40.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:40.166574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:40.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:42:40.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:40.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:40.666658  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:41.166208  483106 type.go:168] "Request Body" body=""
	I1202 21:42:41.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:41.166634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:41.666331  483106 type.go:168] "Request Body" body=""
	I1202 21:42:41.666404  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:41.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:42.166257  483106 type.go:168] "Request Body" body=""
	I1202 21:42:42.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:42.166751  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:42.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:42:42.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:42.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:42.666736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:43.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:43.166460  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:43.166745  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:43.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:42:43.666331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:43.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:44.166537  483106 type.go:168] "Request Body" body=""
	I1202 21:42:44.166616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:44.166962  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:44.666281  483106 type.go:168] "Request Body" body=""
	I1202 21:42:44.666350  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:44.666626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:45.166336  483106 type.go:168] "Request Body" body=""
	I1202 21:42:45.166423  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:45.166767  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:45.166816  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:45.666819  483106 type.go:168] "Request Body" body=""
	I1202 21:42:45.666897  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:45.667261  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.166500  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.166583  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.166847  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.666315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.666679  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:47.166414  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.166497  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:47.166838  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:47.666485  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.666557  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.666832  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.166343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.666684  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:49.166554  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.166635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.166960  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:49.167054  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:49.666877  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.666951  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.167131  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.167578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.666932  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.667019  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.667326  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:51.167186  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.167276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.167691  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:51.167754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:51.666431  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.666506  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.666825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.166160  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.166241  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.166511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.166466  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.166825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.667187  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.667483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:53.667539  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
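	These entries appear to be klog/v2 structured output: single-line values render inline as key="value", while a value containing newlines (the request headers) is rendered as an indented block opened with headers=< and closed with >. A small sketch, assuming klog/v2; the formatting behavior is inferred from the log itself, not taken from minikube's source.

	// klogformat.go: sketch of how the multi-line "headers=<...>" blocks
	// above are produced by klog's structured InfoS logging.
	package main

	import "k8s.io/klog/v2"

	func main() {
		headers := "Accept: application/vnd.kubernetes.protobuf,application/json\n" +
			"User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format"
		// Single-line values print inline; the multi-line headers value is
		// printed as an indented block, matching the entries in this log.
		klog.InfoS("Request", "verb", "GET",
			"url", "https://192.168.49.2:8441/api/v1/nodes/functional-066896",
			"headers", headers)
		klog.Flush()
	}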
	I1202 21:42:54.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.166598  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.166946  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:54.666794  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.666869  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.166549  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.166809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.666671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:56.166359  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.166777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:56.166834  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:56.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.166224  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.166303  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.166628  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.166503  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:58.666661  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:59.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.166838  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.167155  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:59.666449  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.666515  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.166309  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.166395  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.666575  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.666682  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.667068  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:00.667126  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:01.166853  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.167038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.167371  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:01.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.667265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.667601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.166238  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.166322  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.666979  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.667074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.667353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:02.667401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:03.167145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.167221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.167567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:03.666255  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.666326  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.666639  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.166767  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.667023  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.667100  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.667434  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:04.667488  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:05.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.166259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.166604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:05.666866  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.666932  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.167087  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.167170  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.167507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.666273  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.666702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:07.166389  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.166454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.166729  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:07.166773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:07.666440  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.666529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.666861  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.166628  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.166712  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.167093  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.666822  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.666890  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.667183  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:09.167074  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.167152  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.167512  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:09.167567  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:09.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.666352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.666710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.166961  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.167396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.666160  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.166637  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.666393  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.666463  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.666766  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:11.666808  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:12.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.166645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:12.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.666717  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.166302  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.166710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.666374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:14.166633  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.166711  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.167091  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:14.167149  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:14.666871  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.666946  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.667269  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.167061  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.167138  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.167476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.666203  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.666281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.666622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.166164  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.166245  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.166507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.666216  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
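	At the wire level each attempt is just an HTTPS GET with a protobuf-first Accept header; with nothing listening on 192.168.49.2:8441 the TCP dial fails before any response arrives, which is why the "Response" lines show empty status and headers and milliseconds=0. A minimal reproduction sketch follows; InsecureSkipVerify is for illustration only, since a real client would use the cluster CA from the kubeconfig.

	// probe.go: sketch of the raw request recorded by the round_trippers
	// lines above. Against a down apiserver, client.Do fails with
	// "dial tcp 192.168.49.2:8441: connect: connection refused".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		req, err := http.NewRequest(http.MethodGet,
			"https://192.168.49.2:8441/api/v1/nodes/functional-066896", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // e.g. connect: connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}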
	I1202 21:43:17.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.166577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:17.666191  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.666256  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.666511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.166212  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.166315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.166633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.666248  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:19.166505  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.166576  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.166870  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:19.166918  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:19.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.666567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.166357  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.666369  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.666443  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.666785  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:21.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:22.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.166561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.166824  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:22.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.666368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.166281  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.166368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.166699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.666210  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.666283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:24.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.166660  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.167035  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:24.167111  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:24.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.667230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.166928  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.167024  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.667147  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.667223  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.667622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.166295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.666243  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.666504  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:26.666554  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:27.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.166660  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:27.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.166197  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.166524  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.666680  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:28.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:29.166765  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.166840  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.167165  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:29.666897  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.167174  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.167271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.167625  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.666334  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.666419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.666807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:30.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:31.167152  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.167536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:31.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.166351  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.666287  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.666548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:33.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:33.166706  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:33.666243  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.166799  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.666282  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.666375  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.666726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.166319  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.166392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.666514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:35.666568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:36.166250  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.166626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:36.666324  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.666401  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.666725  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.166908  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.166975  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.667118  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.667398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:37.667447  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:38.166151  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.166226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.166528  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:38.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.666633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.166754  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.167075  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.666637  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.666714  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.667049  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:40.166341  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.166420  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.166681  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:40.166728  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:40.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.666455  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.666787  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.666356  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.666429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:42.166327  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.166411  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.166822  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:42.166896  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:42.666589  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.666665  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.667015  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.166747  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.166812  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.167088  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.666863  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.666934  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.667289  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:44.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.166981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.167339  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:44.167397  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:44.666667  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.666740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.667046  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.166921  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.167029  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.666175  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.666253  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.666621  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.166254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.166514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:46.666754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:47.166451  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.166864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:47.667182  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.667255  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.667579  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.666341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:49.166748  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.166817  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:49.167250  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:49.666922  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.667010  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.166155  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.666900  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.667180  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:51.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.167345  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:51.167391  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:51.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.667233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.667577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.166264  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.666171  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.666249  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.166366  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.666529  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:53.666576  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:54.166567  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.166645  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:54.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.667510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.166265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.166542  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:55.666707  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.166311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.166642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:56.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.666282  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.167073  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.167151  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.167546  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.666340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:57.666741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:58.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:58.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.666328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.666634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:44:00.169272  483106 type.go:168] "Request Body" body=""
	W1202 21:44:00.169401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 21:44:00.169464  483106 node_ready.go:38] duration metric: took 6m0.003439328s for node "functional-066896" to be "Ready" ...
	I1202 21:44:00.175124  483106 out.go:203] 
	W1202 21:44:00.178380  483106 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:44:00.178413  483106 out.go:285] * 
	W1202 21:44:00.180645  483106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:44:00.185151  483106 out.go:203] 
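The block above is a poll-until-deadline loop: minikube re-issues the node GET roughly every 500ms, logs each connection-refused miss, and gives up when the 6-minute wait expires. Notably, the terminal error at 21:44:00 comes from the client-side rate limiter rather than the network: once a Wait can no longer complete before the context deadline, it is rejected up front. A minimal Go sketch of that pattern, under stated assumptions (this is not minikube's code; nodeReady is a hypothetical stand-in for the GET against /api/v1/nodes/functional-066896):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// nodeReady is a hypothetical stand-in for fetching the node and reading
// its Ready condition; here it always fails the way the log above does.
func nodeReady(ctx context.Context) (bool, error) {
	return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
}

func main() {
	// 6 minutes mirrors the "wait 6m0s for node" budget in this report.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// One token every 500ms matches the request cadence in the log.
	limiter := rate.NewLimiter(rate.Every(500*time.Millisecond), 1)

	for {
		if err := limiter.Wait(ctx); err != nil {
			// Near the deadline this returns the same
			// "rate: Wait(n=1) would exceed context deadline"
			// error logged at 21:44:00.
			fmt.Println("giving up:", err)
			return
		}
		ready, err := nodeReady(ctx)
		if err != nil {
			fmt.Println("will retry:", err)
			continue
		}
		if ready {
			fmt.Println("node is Ready")
			return
		}
	}
}

Against a dead endpoint this prints retry lines for six minutes and then surfaces the rate-limiter error, which is exactly the shape of the failure above.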
	
	
	==> CRI-O <==
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.116158755Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=d6fd777b-1bb1-431e-9591-d4dc00e55d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.14108786Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.14124499Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.141297528Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.221592581Z" level=info msg="Checking image status: minikube-local-cache-test:functional-066896" id=af739e94-7318-459c-9400-e955cd157d81 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244103008Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-066896" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244243505Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-066896 not found" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244284301Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-066896 found" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.268139675Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-066896" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.269264377Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-066896 not found" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.269313346Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-066896 found" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.0837311Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3ab13c0-493e-44ab-baec-d0bff455f6aa name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.449999322Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.450135126Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.450170647Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.02245815Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.022600412Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.022639624Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.046927288Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.047195328Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.047237231Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.072792262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.073142829Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.073226111Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.634324701Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ff51799f-eb0a-4ede-80e4-d668c6b158e4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:44:14.151710    9965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:14.152409    9965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:14.154030    9965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:14.154592    9965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:14.156080    9965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
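kubectl is failing at the discovery step: before running describe it fetches the server's API group list, and every attempt dies with connection refused, which means nothing is listening on 8441 at all (a live-but-unhealthy apiserver would produce a TLS, auth, or HTTP error instead). A minimal reachability sketch; the URL is taken from the errors above, and certificate verification is skipped only because this probes liveness, not identity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Skip cert verification: this sketch only asks "is anything
		// listening?", the same question kubectl's discovery failed on.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}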
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:44:14 up  3:26,  0 user,  load average: 0.62, 0.31, 0.52
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:44:11 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:12 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 02 21:44:12 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:12 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:12 functional-066896 kubelet[9861]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:12 functional-066896 kubelet[9861]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:12 functional-066896 kubelet[9861]: E1202 21:44:12.707950    9861 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:12 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:12 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:13 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 02 21:44:13 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:13 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:13 functional-066896 kubelet[9882]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:13 functional-066896 kubelet[9882]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:13 functional-066896 kubelet[9882]: E1202 21:44:13.487173    9882 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:13 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:13 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:14 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 02 21:44:14 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:14 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:14 functional-066896 kubelet[9970]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:14 functional-066896 kubelet[9970]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:14 functional-066896 kubelet[9970]: E1202 21:44:14.223199    9970 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:14 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:14 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
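This kubelet crash loop is the root cause the rest of the report orbits: systemd has restarted the kubelet 1,154 times, and each incarnation exits during config validation because the host is still on cgroup v1 while this kubelet refuses to run without the unified (v2) hierarchy. A Linux-only sketch of how such a host check can be made (the kubelet's real check lives in its cgroup library; this only demonstrates the detection, assuming golang.org/x/sys is available):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// On a cgroup v2 host, /sys/fs/cgroup is a cgroup2fs mount and statfs
	// reports CGROUP2_SUPER_MAGIC (0x63677270) as the filesystem type.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - the kubelet above refuses to start here")
	}
}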
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (386.982282ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.44s)
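The --format flag used by the status helper is a Go text/template rendered over minikube's status struct, which is why {{.APIServer}} alone prints the bare word "Stopped" seen in the stdout block. A self-contained sketch with a stand-in struct (the field name mirrors the template used by the test; the struct itself is illustrative, not minikube's type):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders; only the field
// name referenced by the template matters for this sketch.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}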

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-066896 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-066896 get pods: exit status 1 (105.545129ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-066896 get pods": exit status 1
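Same symptom through a second client: kubectl invoked directly is refused on 192.168.49.2:8441. The container also publishes that port to the host (127.0.0.1:33151 in the NetworkSettings dump below), so dialing both routes separates "apiserver down" from "port mapping broken"; refusal on both points at the apiserver itself, consistent with the kubelet crash loop above. A small sketch with both addresses taken from this report:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Container IP route and the Docker-published host route to port 8441.
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:33151"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "=>", err)
			continue
		}
		conn.Close()
		fmt.Println(addr, "=> open")
	}
}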
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
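The inspect dump above is the full docker container inspect record for the kic container backing this profile. When only a single field is needed, the same Go-template filter that minikube's cli_runner uses later in this log keeps the output to one value; a minimal sketch against this container (the 22/tcp mapping shown above resolves to host port 33148 on this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-066896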
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (321.855196ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
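minikube status encodes host/cluster/kubernetes health into separate bits of the exit status, so a non-zero exit alongside a Running host line is not contradictory: the {{.Host}} field queried here is healthy while another component is not (consistent with the control-plane failures elsewhere in this run). A wider template surfaces all fields at once; a sketch against this profile:

	out/minikube-linux-arm64 status -p functional-066896 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'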
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs -n 25: (1.018662163s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-218190 image ls --format short --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image   │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete  │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start   │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start   │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:latest                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add minikube-local-cache-test:functional-066896                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache delete minikube-local-cache-test:functional-066896                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl images                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ cache   │ functional-066896 cache reload                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ kubectl │ functional-066896 kubectl -- --context functional-066896 get pods                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
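	# Rows with an empty END TIME above mark commands that failed or had not
	# finished when these logs were collected: the two start invocations, one
	# crictl inspecti probe, and the final kubectl passthrough under test here.
	# A sketch of replaying that last row against this profile:
	#   out/minikube-linux-arm64 -p functional-066896 kubectl -- --context functional-066896 get pods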
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:37:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:37:54.052280  483106 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:37:54.052518  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052549  483106 out.go:374] Setting ErrFile to fd 2...
	I1202 21:37:54.052570  483106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:37:54.052830  483106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:37:54.053229  483106 out.go:368] Setting JSON to false
	I1202 21:37:54.054096  483106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12002,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:37:54.054239  483106 start.go:143] virtualization:  
	I1202 21:37:54.055968  483106 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:37:54.057216  483106 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:37:54.057305  483106 notify.go:221] Checking for updates...
	I1202 21:37:54.059409  483106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:37:54.060390  483106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:54.061474  483106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:37:54.062609  483106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:37:54.063772  483106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:37:54.065317  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:54.065458  483106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:37:54.087852  483106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:37:54.087968  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.157300  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.14827719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.157407  483106 docker.go:319] overlay module found
	I1202 21:37:54.158855  483106 out.go:179] * Using the docker driver based on existing profile
	I1202 21:37:54.160356  483106 start.go:309] selected driver: docker
	I1202 21:37:54.160374  483106 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.160477  483106 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:37:54.160570  483106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:37:54.221500  483106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:37:54.212376823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:37:54.221914  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:54.221982  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:54.222036  483106 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:54.223816  483106 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:37:54.224907  483106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:37:54.226134  483106 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:37:54.227415  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:54.227490  483106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:37:54.247414  483106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:37:54.247439  483106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:37:54.295322  483106 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:37:54.500334  483106 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
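	# Both preload mirrors return 404 for v1.35.0-beta.0 (presumably no tarball has
	# been published for this beta), so the cache.go steps below fall back to the
	# per-image cache on the host. The miss is reproducible with a HEAD request,
	# e.g.:
	#   curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 | head -n 1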
	I1202 21:37:54.500536  483106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:37:54.500574  483106 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500673  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:37:54.500684  483106 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.936µs
	I1202 21:37:54.500698  483106 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:37:54.500710  483106 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500741  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:37:54.500746  483106 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.194µs
	I1202 21:37:54.500752  483106 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500761  483106 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500788  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:37:54.500788  483106 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:37:54.500792  483106 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.492µs
	I1202 21:37:54.500799  483106 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500809  483106 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500816  483106 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500852  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:37:54.500856  483106 start.go:364] duration metric: took 26.462µs to acquireMachinesLock for "functional-066896"
	I1202 21:37:54.500858  483106 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.838µs
	I1202 21:37:54.500864  483106 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500869  483106 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:37:54.500875  483106 fix.go:54] fixHost starting: 
	I1202 21:37:54.500873  483106 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500901  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:37:54.500905  483106 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.15µs
	I1202 21:37:54.500919  483106 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:37:54.500928  483106 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500951  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:37:54.500956  483106 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.833µs
	I1202 21:37:54.500961  483106 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:37:54.500970  483106 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.500994  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:37:54.500998  483106 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.391µs
	I1202 21:37:54.501003  483106 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:37:54.501011  483106 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:37:54.501036  483106 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:37:54.501040  483106 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.097µs
	I1202 21:37:54.501046  483106 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:37:54.501065  483106 cache.go:87] Successfully saved all images to host disk.
	I1202 21:37:54.501197  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:54.517471  483106 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:37:54.517510  483106 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:37:54.519079  483106 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:37:54.519117  483106 machine.go:94] provisionDockerMachine start ...
	I1202 21:37:54.519205  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.536086  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.536422  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.536437  483106 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:37:54.686523  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.686547  483106 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:37:54.686612  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.710674  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.710988  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.711037  483106 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:37:54.868253  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:37:54.868331  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:54.886749  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:54.887092  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:54.887115  483106 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:37:55.036431  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:37:55.036522  483106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:37:55.036593  483106 ubuntu.go:190] setting up certificates
	I1202 21:37:55.036621  483106 provision.go:84] configureAuth start
	I1202 21:37:55.036718  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:55.055483  483106 provision.go:143] copyHostCerts
	I1202 21:37:55.055534  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055575  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:37:55.055589  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:37:55.055670  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:37:55.055775  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055797  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:37:55.055803  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:37:55.055836  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:37:55.055880  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055901  483106 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:37:55.055908  483106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:37:55.055941  483106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:37:55.055998  483106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:37:55.445716  483106 provision.go:177] copyRemoteCerts
	I1202 21:37:55.445788  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:37:55.445829  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.462295  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:55.566646  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:37:55.566707  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:37:55.584230  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:37:55.584339  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:37:55.601138  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:37:55.601197  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:37:55.619092  483106 provision.go:87] duration metric: took 582.43702ms to configureAuth
	I1202 21:37:55.619117  483106 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:37:55.619308  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:55.619413  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.637231  483106 main.go:143] libmachine: Using SSH client type: native
	I1202 21:37:55.637559  483106 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:37:55.637573  483106 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:37:55.956144  483106 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:37:55.956170  483106 machine.go:97] duration metric: took 1.437044454s to provisionDockerMachine
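	# The provisioning step above writes a one-line environment file and restarts
	# CRI-O. A sketch for checking the result on the node, via the same ssh
	# wrapper exercised elsewhere in this run:
	#   out/minikube-linux-arm64 -p functional-066896 ssh sudo cat /etc/sysconfig/crio.minikube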
	I1202 21:37:55.956204  483106 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:37:55.956218  483106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:37:55.956294  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:37:55.956339  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:55.980756  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.091648  483106 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:37:56.095210  483106 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:37:56.095237  483106 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:37:56.095243  483106 command_runner.go:130] > VERSION_ID="12"
	I1202 21:37:56.095248  483106 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:37:56.095253  483106 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:37:56.095256  483106 command_runner.go:130] > ID=debian
	I1202 21:37:56.095270  483106 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:37:56.095275  483106 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:37:56.095281  483106 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:37:56.095363  483106 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:37:56.095385  483106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:37:56.095402  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:37:56.095457  483106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:37:56.095544  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:37:56.095557  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /etc/ssl/certs/4472112.pem
	I1202 21:37:56.095638  483106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:37:56.095647  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> /etc/test/nested/copy/447211/hosts
	I1202 21:37:56.095696  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:37:56.103392  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:56.120789  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:37:56.138613  483106 start.go:296] duration metric: took 182.392463ms for postStartSetup
	I1202 21:37:56.138692  483106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:37:56.138730  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.156335  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.255560  483106 command_runner.go:130] > 13%
	I1202 21:37:56.256083  483106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:37:56.260264  483106 command_runner.go:130] > 169G
	I1202 21:37:56.260703  483106 fix.go:56] duration metric: took 1.759824513s for fixHost
	I1202 21:37:56.260720  483106 start.go:83] releasing machines lock for "functional-066896", held for 1.759856579s
	I1202 21:37:56.260787  483106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:37:56.278034  483106 ssh_runner.go:195] Run: cat /version.json
	I1202 21:37:56.278057  483106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:37:56.278086  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.278126  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:56.294975  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.296343  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:56.394339  483106 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:37:56.394533  483106 ssh_runner.go:195] Run: systemctl --version
	I1202 21:37:56.493105  483106 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:37:56.493163  483106 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:37:56.493186  483106 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:37:56.493258  483106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:37:56.530464  483106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:37:56.534763  483106 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:37:56.534813  483106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:37:56.534914  483106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:37:56.542668  483106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:37:56.542693  483106 start.go:496] detecting cgroup driver to use...
	I1202 21:37:56.542754  483106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:37:56.542818  483106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:37:56.557769  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:37:56.570749  483106 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:37:56.570845  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:37:56.586179  483106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:37:56.599149  483106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:37:56.708191  483106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:37:56.842013  483106 docker.go:234] disabling docker service ...
	I1202 21:37:56.842082  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:37:56.857073  483106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:37:56.870370  483106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:37:56.987213  483106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:37:57.106635  483106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:37:57.119596  483106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:37:57.132314  483106 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 21:37:57.133557  483106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:37:57.133663  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.142404  483106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:37:57.142548  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.151265  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.160043  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.168450  483106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:37:57.177232  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.186240  483106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.194528  483106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
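	# Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf,
	# sketched for just the touched keys (the rest of the file is not shown in
	# this log):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]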
	I1202 21:37:57.203498  483106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:37:57.209931  483106 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:37:57.210879  483106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:37:57.218360  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.328965  483106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 21:37:57.485223  483106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:37:57.485296  483106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:37:57.489286  483106 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 21:37:57.489311  483106 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:37:57.489318  483106 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 21:37:57.489325  483106 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:57.489330  483106 command_runner.go:130] > Access: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489343  483106 command_runner.go:130] > Modify: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489348  483106 command_runner.go:130] > Change: 2025-12-02 21:37:57.436771407 +0000
	I1202 21:37:57.489352  483106 command_runner.go:130] >  Birth: -
	I1202 21:37:57.489576  483106 start.go:564] Will wait 60s for crictl version
	I1202 21:37:57.489633  483106 ssh_runner.go:195] Run: which crictl
	I1202 21:37:57.495444  483106 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:37:57.495541  483106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:37:57.522065  483106 command_runner.go:130] > Version:  0.1.0
	I1202 21:37:57.522330  483106 command_runner.go:130] > RuntimeName:  cri-o
	I1202 21:37:57.522612  483106 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 21:37:57.522814  483106 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:37:57.525085  483106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:37:57.525167  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.560503  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.560529  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.560537  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.560542  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.560547  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.560551  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.560555  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.560560  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.560564  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.560568  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.560572  483106 command_runner.go:130] >      static
	I1202 21:37:57.560580  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.560584  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.560589  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.560595  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.560598  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.560603  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.560612  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.560616  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.560620  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.563007  483106 ssh_runner.go:195] Run: crio --version
	I1202 21:37:57.589712  483106 command_runner.go:130] > crio version 1.34.2
	I1202 21:37:57.589787  483106 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 21:37:57.589809  483106 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 21:37:57.589825  483106 command_runner.go:130] >    GitTreeState:   dirty
	I1202 21:37:57.589855  483106 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 21:37:57.589880  483106 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 21:37:57.589897  483106 command_runner.go:130] >    Compiler:       gc
	I1202 21:37:57.589914  483106 command_runner.go:130] >    Platform:       linux/arm64
	I1202 21:37:57.589955  483106 command_runner.go:130] >    Linkmode:       static
	I1202 21:37:57.589975  483106 command_runner.go:130] >    BuildTags:
	I1202 21:37:57.589991  483106 command_runner.go:130] >      static
	I1202 21:37:57.590007  483106 command_runner.go:130] >      netgo
	I1202 21:37:57.590023  483106 command_runner.go:130] >      osusergo
	I1202 21:37:57.590049  483106 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 21:37:57.590069  483106 command_runner.go:130] >      seccomp
	I1202 21:37:57.590086  483106 command_runner.go:130] >      apparmor
	I1202 21:37:57.590103  483106 command_runner.go:130] >      selinux
	I1202 21:37:57.590120  483106 command_runner.go:130] >    LDFlags:          unknown
	I1202 21:37:57.590146  483106 command_runner.go:130] >    SeccompEnabled:   true
	I1202 21:37:57.590164  483106 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 21:37:57.593809  483106 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:37:57.595025  483106 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
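
The docker network inspect call above uses a Go template to pull the subnet, gateway, MTU and per-container IPs of the functional-066896 network in one pass. A trimmed-down hand-run variant of the same idea (standard docker CLI templating, simplified from the log's format string):

  docker network inspect functional-066896 \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
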
	I1202 21:37:57.611773  483106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:37:57.615442  483106 command_runner.go:130] > 192.168.49.1	host.minikube.internal
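
The grep confirms that host.minikube.internal already points at the Docker network gateway (192.168.49.1) inside the node, so workloads can reach the host. If the entry were missing, an idempotent fix would look like the following (the tee -a form is illustrative; it is not what the log shows minikube running):

  grep -q 'host.minikube.internal' /etc/hosts \
    || printf '192.168.49.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts
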
	I1202 21:37:57.615683  483106 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:37:57.615790  483106 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:37:57.615841  483106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:37:57.645971  483106 command_runner.go:130] > {
	I1202 21:37:57.645994  483106 command_runner.go:130] >   "images":  [
	I1202 21:37:57.645998  483106 command_runner.go:130] >     {
	I1202 21:37:57.646007  483106 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:37:57.646011  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646017  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:37:57.646020  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646024  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646033  483106 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 21:37:57.646036  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646041  483106 command_runner.go:130] >       "size":  "29035622",
	I1202 21:37:57.646045  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646049  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646052  483106 command_runner.go:130] >     },
	I1202 21:37:57.646054  483106 command_runner.go:130] >     {
	I1202 21:37:57.646060  483106 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:37:57.646068  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646074  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:37:57.646077  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646080  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646088  483106 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 21:37:57.646096  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646101  483106 command_runner.go:130] >       "size":  "74488375",
	I1202 21:37:57.646105  483106 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:37:57.646109  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646112  483106 command_runner.go:130] >     },
	I1202 21:37:57.646115  483106 command_runner.go:130] >     {
	I1202 21:37:57.646121  483106 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:37:57.646124  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646129  483106 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:37:57.646132  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646136  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646147  483106 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 21:37:57.646150  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646157  483106 command_runner.go:130] >       "size":  "60854229",
	I1202 21:37:57.646161  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646165  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646168  483106 command_runner.go:130] >       },
	I1202 21:37:57.646172  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646175  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646178  483106 command_runner.go:130] >     },
	I1202 21:37:57.646181  483106 command_runner.go:130] >     {
	I1202 21:37:57.646187  483106 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:37:57.646191  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646196  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:37:57.646200  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646203  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646211  483106 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 21:37:57.646216  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646220  483106 command_runner.go:130] >       "size":  "84947242",
	I1202 21:37:57.646223  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646227  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646230  483106 command_runner.go:130] >       },
	I1202 21:37:57.646234  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646238  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646241  483106 command_runner.go:130] >     },
	I1202 21:37:57.646243  483106 command_runner.go:130] >     {
	I1202 21:37:57.646250  483106 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:37:57.646253  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646259  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:37:57.646262  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646266  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646274  483106 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 21:37:57.646277  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646285  483106 command_runner.go:130] >       "size":  "72167568",
	I1202 21:37:57.646289  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646292  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646299  483106 command_runner.go:130] >       },
	I1202 21:37:57.646305  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646309  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646313  483106 command_runner.go:130] >     },
	I1202 21:37:57.646316  483106 command_runner.go:130] >     {
	I1202 21:37:57.646322  483106 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:37:57.646326  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646331  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:37:57.646334  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646338  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646345  483106 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 21:37:57.646348  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646352  483106 command_runner.go:130] >       "size":  "74105124",
	I1202 21:37:57.646356  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646360  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646363  483106 command_runner.go:130] >     },
	I1202 21:37:57.646365  483106 command_runner.go:130] >     {
	I1202 21:37:57.646372  483106 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:37:57.646375  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646381  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:37:57.646384  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646387  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646399  483106 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 21:37:57.646403  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646406  483106 command_runner.go:130] >       "size":  "49819792",
	I1202 21:37:57.646409  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646413  483106 command_runner.go:130] >         "value":  "0"
	I1202 21:37:57.646416  483106 command_runner.go:130] >       },
	I1202 21:37:57.646421  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646424  483106 command_runner.go:130] >       "pinned":  false
	I1202 21:37:57.646427  483106 command_runner.go:130] >     },
	I1202 21:37:57.646430  483106 command_runner.go:130] >     {
	I1202 21:37:57.646436  483106 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:37:57.646443  483106 command_runner.go:130] >       "repoTags":  [
	I1202 21:37:57.646447  483106 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.646450  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646454  483106 command_runner.go:130] >       "repoDigests":  [
	I1202 21:37:57.646461  483106 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 21:37:57.646464  483106 command_runner.go:130] >       ],
	I1202 21:37:57.646468  483106 command_runner.go:130] >       "size":  "517328",
	I1202 21:37:57.646471  483106 command_runner.go:130] >       "uid":  {
	I1202 21:37:57.646474  483106 command_runner.go:130] >         "value":  "65535"
	I1202 21:37:57.646477  483106 command_runner.go:130] >       },
	I1202 21:37:57.646481  483106 command_runner.go:130] >       "username":  "",
	I1202 21:37:57.646485  483106 command_runner.go:130] >       "pinned":  true
	I1202 21:37:57.646488  483106 command_runner.go:130] >     }
	I1202 21:37:57.646491  483106 command_runner.go:130] >   ]
	I1202 21:37:57.646493  483106 command_runner.go:130] > }
	I1202 21:37:57.648114  483106 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:37:57.648141  483106 cache_images.go:86] Images are preloaded, skipping loading
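
The crictl images --output json dump above is the preload check: every image required for v1.35.0-beta.0 on CRI-O (apiserver, controller-manager, scheduler, proxy, etcd, CoreDNS, pause, storage-provisioner) is already present, so cache_images skips loading. To list the same tags without the log framing (jq is an assumption here, not shown in the log):

  sudo crictl images --output json | jq -r '.images[].repoTags[]'
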
	I1202 21:37:57.648149  483106 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:37:57.648254  483106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
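
The [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube generates for the kubelet: it clears any inherited ExecStart and replaces it with the version-pinned binary plus node-specific flags from the cluster config. A hand-written sketch of installing such a drop-in (the 10-kubeadm.conf path is an assumption based on the usual kubeadm layout; the flags are copied from the log):

  sudo install -d /etc/systemd/system/kubelet.service.d
  printf '%s\n' '[Unit]' 'Wants=crio.service' '' '[Service]' 'ExecStart=' \
    'ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2' \
    '' '[Install]' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
  sudo systemctl daemon-reload
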
	I1202 21:37:57.648333  483106 ssh_runner.go:195] Run: crio config
	I1202 21:37:57.700265  483106 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 21:37:57.700298  483106 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 21:37:57.700306  483106 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 21:37:57.700310  483106 command_runner.go:130] > #
	I1202 21:37:57.700318  483106 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 21:37:57.700324  483106 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 21:37:57.700331  483106 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 21:37:57.700339  483106 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 21:37:57.700343  483106 command_runner.go:130] > # reload'.
	I1202 21:37:57.700350  483106 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 21:37:57.700357  483106 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 21:37:57.700363  483106 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 21:37:57.700373  483106 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 21:37:57.700376  483106 command_runner.go:130] > [crio]
	I1202 21:37:57.700387  483106 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 21:37:57.700395  483106 command_runner.go:130] > # containers images, in this directory.
	I1202 21:37:57.700407  483106 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 21:37:57.700421  483106 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 21:37:57.700427  483106 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 21:37:57.700434  483106 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 21:37:57.700447  483106 command_runner.go:130] > # imagestore = ""
	I1202 21:37:57.700456  483106 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 21:37:57.700462  483106 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 21:37:57.700469  483106 command_runner.go:130] > # storage_driver = "overlay"
	I1202 21:37:57.700475  483106 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 21:37:57.700484  483106 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 21:37:57.700488  483106 command_runner.go:130] > # storage_option = [
	I1202 21:37:57.700493  483106 command_runner.go:130] > # ]
	I1202 21:37:57.700499  483106 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 21:37:57.700508  483106 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 21:37:57.700513  483106 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 21:37:57.700520  483106 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 21:37:57.700528  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 21:37:57.700532  483106 command_runner.go:130] > # always happen on a node reboot
	I1202 21:37:57.700541  483106 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 21:37:57.700555  483106 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 21:37:57.700563  483106 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 21:37:57.700568  483106 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 21:37:57.700573  483106 command_runner.go:130] > # version_file_persist = ""
	I1202 21:37:57.700587  483106 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 21:37:57.700595  483106 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 21:37:57.700603  483106 command_runner.go:130] > # internal_wipe = true
	I1202 21:37:57.700612  483106 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 21:37:57.700617  483106 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 21:37:57.700629  483106 command_runner.go:130] > # internal_repair = true
	I1202 21:37:57.700634  483106 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 21:37:57.700640  483106 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 21:37:57.700650  483106 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 21:37:57.700656  483106 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 21:37:57.700661  483106 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 21:37:57.700667  483106 command_runner.go:130] > [crio.api]
	I1202 21:37:57.700672  483106 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 21:37:57.700677  483106 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 21:37:57.700685  483106 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 21:37:57.700690  483106 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 21:37:57.700699  483106 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 21:37:57.700710  483106 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 21:37:57.700714  483106 command_runner.go:130] > # stream_port = "0"
	I1202 21:37:57.700720  483106 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 21:37:57.700725  483106 command_runner.go:130] > # stream_enable_tls = false
	I1202 21:37:57.700731  483106 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 21:37:57.700954  483106 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 21:37:57.700969  483106 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 21:37:57.700976  483106 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 21:37:57.700981  483106 command_runner.go:130] > # stream_tls_cert = ""
	I1202 21:37:57.700988  483106 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 21:37:57.700994  483106 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 21:37:57.701175  483106 command_runner.go:130] > # stream_tls_key = ""
	I1202 21:37:57.701188  483106 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 21:37:57.701195  483106 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 21:37:57.701200  483106 command_runner.go:130] > # automatically pick up the changes.
	I1202 21:37:57.701204  483106 command_runner.go:130] > # stream_tls_ca = ""
	I1202 21:37:57.701226  483106 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701255  483106 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 21:37:57.701272  483106 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 21:37:57.701278  483106 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 21:37:57.701285  483106 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 21:37:57.701296  483106 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 21:37:57.701300  483106 command_runner.go:130] > [crio.runtime]
	I1202 21:37:57.701306  483106 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 21:37:57.701315  483106 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 21:37:57.701318  483106 command_runner.go:130] > # "nofile=1024:2048"
	I1202 21:37:57.701324  483106 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 21:37:57.701328  483106 command_runner.go:130] > # default_ulimits = [
	I1202 21:37:57.701331  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701338  483106 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 21:37:57.701348  483106 command_runner.go:130] > # no_pivot = false
	I1202 21:37:57.701354  483106 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 21:37:57.701360  483106 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 21:37:57.701368  483106 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 21:37:57.701374  483106 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 21:37:57.701385  483106 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 21:37:57.701395  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701399  483106 command_runner.go:130] > # conmon = ""
	I1202 21:37:57.701403  483106 command_runner.go:130] > # Cgroup setting for conmon
	I1202 21:37:57.701410  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 21:37:57.701414  483106 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 21:37:57.701420  483106 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 21:37:57.701425  483106 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 21:37:57.701432  483106 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 21:37:57.701438  483106 command_runner.go:130] > # conmon_env = [
	I1202 21:37:57.701441  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701447  483106 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 21:37:57.701459  483106 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 21:37:57.701465  483106 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 21:37:57.701470  483106 command_runner.go:130] > # default_env = [
	I1202 21:37:57.701475  483106 command_runner.go:130] > # ]
	I1202 21:37:57.701481  483106 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 21:37:57.701491  483106 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1202 21:37:57.701495  483106 command_runner.go:130] > # selinux = false
	I1202 21:37:57.701501  483106 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 21:37:57.701509  483106 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 21:37:57.701516  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701526  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.701533  483106 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 21:37:57.701541  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701545  483106 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 21:37:57.701551  483106 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 21:37:57.701559  483106 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 21:37:57.701566  483106 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 21:37:57.701575  483106 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 21:37:57.701580  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701584  483106 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 21:37:57.701590  483106 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 21:37:57.701595  483106 command_runner.go:130] > # the cgroup blockio controller.
	I1202 21:37:57.701601  483106 command_runner.go:130] > # blockio_config_file = ""
	I1202 21:37:57.701608  483106 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 21:37:57.701614  483106 command_runner.go:130] > # blockio parameters.
	I1202 21:37:57.701618  483106 command_runner.go:130] > # blockio_reload = false
	I1202 21:37:57.701625  483106 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 21:37:57.701628  483106 command_runner.go:130] > # irqbalance daemon.
	I1202 21:37:57.701634  483106 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 21:37:57.701642  483106 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 21:37:57.701649  483106 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 21:37:57.701659  483106 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 21:37:57.701689  483106 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 21:37:57.701703  483106 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 21:37:57.701707  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.701711  483106 command_runner.go:130] > # rdt_config_file = ""
	I1202 21:37:57.701717  483106 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 21:37:57.701723  483106 command_runner.go:130] > cgroup_manager = "cgroupfs"
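
cgroup_manager = "cgroupfs" has to agree with the kubelet's cgroup driver, or pod sandboxes fail to start; minikube keeps both sides on cgroupfs in this kicbase image. A quick consistency check (the kubelet config path is the standard kubeadm location, assumed here):

  sudo crio config 2>/dev/null | grep '^cgroup_manager'
  sudo grep cgroupDriver /var/lib/kubelet/config.yaml
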
	I1202 21:37:57.701730  483106 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 21:37:57.701736  483106 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 21:37:57.701742  483106 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 21:37:57.701751  483106 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 21:37:57.701755  483106 command_runner.go:130] > # will be added.
	I1202 21:37:57.701763  483106 command_runner.go:130] > # default_capabilities = [
	I1202 21:37:57.701968  483106 command_runner.go:130] > # 	"CHOWN",
	I1202 21:37:57.702017  483106 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 21:37:57.702029  483106 command_runner.go:130] > # 	"FSETID",
	I1202 21:37:57.702033  483106 command_runner.go:130] > # 	"FOWNER",
	I1202 21:37:57.702037  483106 command_runner.go:130] > # 	"SETGID",
	I1202 21:37:57.702040  483106 command_runner.go:130] > # 	"SETUID",
	I1202 21:37:57.702175  483106 command_runner.go:130] > # 	"SETPCAP",
	I1202 21:37:57.702197  483106 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 21:37:57.702202  483106 command_runner.go:130] > # 	"KILL",
	I1202 21:37:57.702205  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702213  483106 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 21:37:57.702220  483106 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 21:37:57.702225  483106 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 21:37:57.702232  483106 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 21:37:57.702247  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702251  483106 command_runner.go:130] > default_sysctls = [
	I1202 21:37:57.702282  483106 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 21:37:57.702290  483106 command_runner.go:130] > ]
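
This uncommented default_sysctls block is where the sed from the top of this section landed: net.ipv4.ip_unprivileged_port_start=0 is now effective daemon config, so container processes can bind ports below 1024 without CAP_NET_BIND_SERVICE. One way to confirm the active value from the rendered config:

  sudo crio config 2>/dev/null | grep -A2 '^default_sysctls'
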
	I1202 21:37:57.702302  483106 command_runner.go:130] > # List of devices on the host that a
	I1202 21:37:57.702309  483106 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 21:37:57.702317  483106 command_runner.go:130] > # allowed_devices = [
	I1202 21:37:57.702321  483106 command_runner.go:130] > # 	"/dev/fuse",
	I1202 21:37:57.702326  483106 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 21:37:57.702496  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702509  483106 command_runner.go:130] > # List of additional devices, specified as
	I1202 21:37:57.702523  483106 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 21:37:57.702529  483106 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 21:37:57.702539  483106 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 21:37:57.702546  483106 command_runner.go:130] > # additional_devices = [
	I1202 21:37:57.702553  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702559  483106 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 21:37:57.702562  483106 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 21:37:57.702593  483106 command_runner.go:130] > # 	"/etc/cdi",
	I1202 21:37:57.702605  483106 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 21:37:57.702609  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702616  483106 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 21:37:57.702632  483106 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 21:37:57.702636  483106 command_runner.go:130] > # Defaults to false.
	I1202 21:37:57.702641  483106 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 21:37:57.702647  483106 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 21:37:57.702655  483106 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 21:37:57.702659  483106 command_runner.go:130] > # hooks_dir = [
	I1202 21:37:57.702849  483106 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 21:37:57.702860  483106 command_runner.go:130] > # ]
	I1202 21:37:57.702867  483106 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 21:37:57.702879  483106 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 21:37:57.702886  483106 command_runner.go:130] > # its default mounts from the following two files:
	I1202 21:37:57.702893  483106 command_runner.go:130] > #
	I1202 21:37:57.702899  483106 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 21:37:57.702905  483106 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 21:37:57.702911  483106 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 21:37:57.702913  483106 command_runner.go:130] > #
	I1202 21:37:57.702919  483106 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 21:37:57.702925  483106 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 21:37:57.702932  483106 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 21:37:57.702937  483106 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 21:37:57.702942  483106 command_runner.go:130] > #
	I1202 21:37:57.702974  483106 command_runner.go:130] > # default_mounts_file = ""
	I1202 21:37:57.702983  483106 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 21:37:57.702990  483106 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 21:37:57.703009  483106 command_runner.go:130] > # pids_limit = -1
	I1202 21:37:57.703018  483106 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 21:37:57.703024  483106 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 21:37:57.703030  483106 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 21:37:57.703039  483106 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 21:37:57.703043  483106 command_runner.go:130] > # log_size_max = -1
	I1202 21:37:57.703053  483106 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 21:37:57.703070  483106 command_runner.go:130] > # log_to_journald = false
	I1202 21:37:57.703082  483106 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 21:37:57.703090  483106 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 21:37:57.703102  483106 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 21:37:57.703112  483106 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 21:37:57.703121  483106 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 21:37:57.703294  483106 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 21:37:57.703314  483106 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 21:37:57.703388  483106 command_runner.go:130] > # read_only = false
	I1202 21:37:57.703403  483106 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 21:37:57.703410  483106 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 21:37:57.703414  483106 command_runner.go:130] > # live configuration reload.
	I1202 21:37:57.703418  483106 command_runner.go:130] > # log_level = "info"
	I1202 21:37:57.703429  483106 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 21:37:57.703434  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.703441  483106 command_runner.go:130] > # log_filter = ""
	I1202 21:37:57.703448  483106 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703456  483106 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 21:37:57.703459  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703467  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703471  483106 command_runner.go:130] > # uid_mappings = ""
	I1202 21:37:57.703477  483106 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 21:37:57.703489  483106 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 21:37:57.703492  483106 command_runner.go:130] > # separated by comma.
	I1202 21:37:57.703500  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703504  483106 command_runner.go:130] > # gid_mappings = ""
	I1202 21:37:57.703510  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 21:37:57.703518  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703524  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703532  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703561  483106 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 21:37:57.703582  483106 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 21:37:57.703590  483106 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 21:37:57.703596  483106 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 21:37:57.703606  483106 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 21:37:57.703769  483106 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 21:37:57.703787  483106 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 21:37:57.703803  483106 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 21:37:57.703810  483106 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 21:37:57.703970  483106 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 21:37:57.703985  483106 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 21:37:57.703996  483106 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 21:37:57.704002  483106 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 21:37:57.704010  483106 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 21:37:57.704013  483106 command_runner.go:130] > # drop_infra_ctr = true
	I1202 21:37:57.704023  483106 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 21:37:57.704035  483106 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 21:37:57.704043  483106 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 21:37:57.704046  483106 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 21:37:57.704053  483106 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 21:37:57.704059  483106 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 21:37:57.704066  483106 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 21:37:57.704073  483106 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 21:37:57.704077  483106 command_runner.go:130] > # shared_cpuset = ""
	I1202 21:37:57.704088  483106 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 21:37:57.704094  483106 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 21:37:57.704098  483106 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 21:37:57.704111  483106 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 21:37:57.704115  483106 command_runner.go:130] > # pinns_path = ""
	I1202 21:37:57.704126  483106 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 21:37:57.704133  483106 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 21:37:57.704159  483106 command_runner.go:130] > # enable_criu_support = true
	I1202 21:37:57.704170  483106 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 21:37:57.704177  483106 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 21:37:57.704281  483106 command_runner.go:130] > # enable_pod_events = false
	I1202 21:37:57.704302  483106 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 21:37:57.704308  483106 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 21:37:57.704428  483106 command_runner.go:130] > # default_runtime = "crun"
	I1202 21:37:57.704441  483106 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 21:37:57.704455  483106 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1202 21:37:57.704470  483106 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 21:37:57.704476  483106 command_runner.go:130] > # creation as a file is not desired either.
	I1202 21:37:57.704485  483106 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 21:37:57.704501  483106 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 21:37:57.704506  483106 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 21:37:57.704638  483106 command_runner.go:130] > # ]
	I1202 21:37:57.704649  483106 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 21:37:57.704656  483106 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 21:37:57.704663  483106 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 21:37:57.704668  483106 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 21:37:57.704671  483106 command_runner.go:130] > #
	I1202 21:37:57.704676  483106 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 21:37:57.704681  483106 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 21:37:57.704688  483106 command_runner.go:130] > # runtime_type = "oci"
	I1202 21:37:57.704693  483106 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 21:37:57.704697  483106 command_runner.go:130] > # inherit_default_runtime = false
	I1202 21:37:57.704710  483106 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 21:37:57.704715  483106 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 21:37:57.704720  483106 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 21:37:57.704728  483106 command_runner.go:130] > # monitor_env = []
	I1202 21:37:57.704733  483106 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 21:37:57.704737  483106 command_runner.go:130] > # allowed_annotations = []
	I1202 21:37:57.704743  483106 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 21:37:57.704749  483106 command_runner.go:130] > # no_sync_log = false
	I1202 21:37:57.704753  483106 command_runner.go:130] > # default_annotations = {}
	I1202 21:37:57.704757  483106 command_runner.go:130] > # stream_websockets = false
	I1202 21:37:57.704761  483106 command_runner.go:130] > # seccomp_profile = ""
	I1202 21:37:57.704791  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.704803  483106 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 21:37:57.704810  483106 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 21:37:57.704816  483106 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 21:37:57.704822  483106 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 21:37:57.704828  483106 command_runner.go:130] > #   in $PATH.
	I1202 21:37:57.704835  483106 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 21:37:57.704844  483106 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 21:37:57.704850  483106 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 21:37:57.704853  483106 command_runner.go:130] > #   state.
	I1202 21:37:57.704859  483106 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 21:37:57.704870  483106 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 21:37:57.704879  483106 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 21:37:57.704885  483106 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 21:37:57.704891  483106 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 21:37:57.704899  483106 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 21:37:57.704907  483106 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 21:37:57.704917  483106 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 21:37:57.704923  483106 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 21:37:57.704931  483106 command_runner.go:130] > #   The currently recognized values are:
	I1202 21:37:57.704940  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 21:37:57.704947  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 21:37:57.704954  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 21:37:57.704962  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 21:37:57.704969  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 21:37:57.704978  483106 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 21:37:57.704985  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 21:37:57.704992  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 21:37:57.705001  483106 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 21:37:57.705008  483106 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 21:37:57.705017  483106 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 21:37:57.705023  483106 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 21:37:57.705029  483106 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 21:37:57.705035  483106 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 21:37:57.705045  483106 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 21:37:57.705054  483106 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 21:37:57.705068  483106 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 21:37:57.705072  483106 command_runner.go:130] > #   deprecated option "conmon".
	I1202 21:37:57.705080  483106 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 21:37:57.705088  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 21:37:57.705095  483106 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 21:37:57.705101  483106 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 21:37:57.705108  483106 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 21:37:57.705113  483106 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 21:37:57.705129  483106 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 21:37:57.705135  483106 command_runner.go:130] > #   conmon-rs by using:
	I1202 21:37:57.705143  483106 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 21:37:57.705154  483106 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 21:37:57.705165  483106 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 21:37:57.705176  483106 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 21:37:57.705183  483106 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 21:37:57.705191  483106 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 21:37:57.705198  483106 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 21:37:57.705203  483106 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 21:37:57.705214  483106 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 21:37:57.705222  483106 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 21:37:57.705228  483106 command_runner.go:130] > #   when a machine crash happens.
	I1202 21:37:57.705235  483106 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 21:37:57.705243  483106 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 21:37:57.705253  483106 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 21:37:57.705257  483106 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 21:37:57.705263  483106 command_runner.go:130] > #   If not specified or set to "", the global seccomp_profile will be used.
	I1202 21:37:57.705273  483106 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 21:37:57.705275  483106 command_runner.go:130] > #
	I1202 21:37:57.705280  483106 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 21:37:57.705285  483106 command_runner.go:130] > #
	I1202 21:37:57.705292  483106 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 21:37:57.705301  483106 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 21:37:57.705304  483106 command_runner.go:130] > #
	I1202 21:37:57.705310  483106 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 21:37:57.705317  483106 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 21:37:57.705322  483106 command_runner.go:130] > #
	I1202 21:37:57.705328  483106 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 21:37:57.705331  483106 command_runner.go:130] > # feature.
	I1202 21:37:57.705336  483106 command_runner.go:130] > #
	I1202 21:37:57.705342  483106 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1202 21:37:57.705350  483106 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 21:37:57.705360  483106 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 21:37:57.705367  483106 command_runner.go:130] > # a blocked syscall and will terminate the workload after a timeout of 5
	I1202 21:37:57.705375  483106 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 21:37:57.705382  483106 command_runner.go:130] > #
	I1202 21:37:57.705388  483106 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 21:37:57.705397  483106 command_runner.go:130] > # since the timeout will get reset once a new syscall has been discovered.
	I1202 21:37:57.705399  483106 command_runner.go:130] > #
	I1202 21:37:57.705405  483106 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1202 21:37:57.705411  483106 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 21:37:57.705416  483106 command_runner.go:130] > #
	I1202 21:37:57.705422  483106 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 21:37:57.705428  483106 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 21:37:57.705433  483106 command_runner.go:130] > # limitation.
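To make the flow above concrete, here is a minimal pod sketch that opts into the notifier. The pod name and image are illustrative, and it assumes the selected runtime handler already lists the annotation in allowed_annotations as described:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                     # hypothetical name
	  annotations:
	    # stop the workload once a blocked syscall is observed
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                            # required, or the kubelet restarts the container
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.10.1
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault                      # the notifier acts on a chosen profile, not on defaultAction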
	I1202 21:37:57.705469  483106 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 21:37:57.705480  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 21:37:57.705484  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705488  483106 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 21:37:57.705492  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705499  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705503  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705510  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705514  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705518  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705521  483106 command_runner.go:130] > allowed_annotations = [
	I1202 21:37:57.705734  483106 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 21:37:57.705745  483106 command_runner.go:130] > ]
	I1202 21:37:57.705770  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705779  483106 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 21:37:57.705849  483106 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 21:37:57.705872  483106 command_runner.go:130] > runtime_type = ""
	I1202 21:37:57.705883  483106 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 21:37:57.705901  483106 command_runner.go:130] > inherit_default_runtime = false
	I1202 21:37:57.705906  483106 command_runner.go:130] > runtime_config_path = ""
	I1202 21:37:57.705910  483106 command_runner.go:130] > container_min_memory = ""
	I1202 21:37:57.705915  483106 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 21:37:57.705921  483106 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 21:37:57.705925  483106 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 21:37:57.705929  483106 command_runner.go:130] > privileged_without_host_devices = false
	I1202 21:37:57.705937  483106 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 21:37:57.705944  483106 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 21:37:57.705965  483106 command_runner.go:130] > # Note that the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 21:37:57.705974  483106 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 21:37:57.705985  483106 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 21:37:57.706000  483106 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 21:37:57.706009  483106 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 21:37:57.706015  483106 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 21:37:57.706025  483106 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 21:37:57.706051  483106 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 21:37:57.706057  483106 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 21:37:57.706077  483106 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 21:37:57.706082  483106 command_runner.go:130] > # Example:
	I1202 21:37:57.706087  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 21:37:57.706091  483106 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 21:37:57.706096  483106 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 21:37:57.706102  483106 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 21:37:57.706105  483106 command_runner.go:130] > # cpuset = "0-1"
	I1202 21:37:57.706108  483106 command_runner.go:130] > # cpushares = "5"
	I1202 21:37:57.706112  483106 command_runner.go:130] > # cpuquota = "1000"
	I1202 21:37:57.706116  483106 command_runner.go:130] > # cpuperiod = "100000"
	I1202 21:37:57.706120  483106 command_runner.go:130] > # cpulimit = "35"
	I1202 21:37:57.706126  483106 command_runner.go:130] > # Where:
	I1202 21:37:57.706131  483106 command_runner.go:130] > # The workload name is workload-type.
	I1202 21:37:57.706143  483106 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 21:37:57.706160  483106 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 21:37:57.706180  483106 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 21:37:57.706189  483106 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 21:37:57.706195  483106 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name = "value""
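A hypothetical pod that opts into the workload-type example above and overrides cpushares for one container would then carry both annotations (container name and value are illustrative):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                             # hypothetical name
	  annotations:
	    io.crio/workload: ""                          # activation annotation (key only, value ignored)
	    io.crio.workload-type.cpushares/app: "512"    # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1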
	I1202 21:37:57.706229  483106 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 21:37:57.706243  483106 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 21:37:57.706247  483106 command_runner.go:130] > # Default value is set to true
	I1202 21:37:57.706253  483106 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 21:37:57.706261  483106 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 21:37:57.706266  483106 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 21:37:57.706271  483106 command_runner.go:130] > # Default value is set to 'false'
	I1202 21:37:57.706275  483106 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 21:37:57.706280  483106 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 21:37:57.706291  483106 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 21:37:57.706299  483106 command_runner.go:130] > # timezone = ""
	I1202 21:37:57.706306  483106 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 21:37:57.706308  483106 command_runner.go:130] > #
	I1202 21:37:57.706315  483106 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system-wide
	I1202 21:37:57.706326  483106 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 21:37:57.706329  483106 command_runner.go:130] > [crio.image]
	I1202 21:37:57.706338  483106 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 21:37:57.706348  483106 command_runner.go:130] > # default_transport = "docker://"
	I1202 21:37:57.706354  483106 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 21:37:57.706360  483106 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706497  483106 command_runner.go:130] > # global_auth_file = ""
	I1202 21:37:57.706512  483106 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 21:37:57.706518  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706617  483106 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 21:37:57.706659  483106 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 21:37:57.706671  483106 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 21:37:57.706677  483106 command_runner.go:130] > # This option supports live configuration reload.
	I1202 21:37:57.706682  483106 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 21:37:57.706688  483106 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 21:37:57.706698  483106 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 21:37:57.706714  483106 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 21:37:57.706730  483106 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 21:37:57.706734  483106 command_runner.go:130] > # pause_command = "/pause"
	I1202 21:37:57.706749  483106 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 21:37:57.706756  483106 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 21:37:57.706771  483106 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 21:37:57.706777  483106 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 21:37:57.706783  483106 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 21:37:57.706791  483106 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 21:37:57.706795  483106 command_runner.go:130] > # pinned_images = [
	I1202 21:37:57.706798  483106 command_runner.go:130] > # ]
	I1202 21:37:57.706806  483106 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 21:37:57.706813  483106 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 21:37:57.706822  483106 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 21:37:57.706828  483106 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 21:37:57.706834  483106 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 21:37:57.707022  483106 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 21:37:57.707046  483106 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 21:37:57.707056  483106 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 21:37:57.707066  483106 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 21:37:57.707073  483106 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or
	I1202 21:37:57.707084  483106 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1202 21:37:57.707105  483106 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
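For orientation, the smallest syntactically valid policy file simply accepts everything; real deployments normally pin registries to signature requirements instead (see containers-policy.json(5)). A sketch of the file contents:

	{
	  "default": [
	    {"type": "insecureAcceptAnything"}
	  ]
	}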
	I1202 21:37:57.707129  483106 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 21:37:57.707141  483106 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 21:37:57.707146  483106 command_runner.go:130] > # changing them here.
	I1202 21:37:57.707158  483106 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 21:37:57.707163  483106 command_runner.go:130] > # insecure_registries = [
	I1202 21:37:57.707278  483106 command_runner.go:130] > # ]
	I1202 21:37:57.707303  483106 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 21:37:57.707309  483106 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 21:37:57.707323  483106 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 21:37:57.707334  483106 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 21:37:57.707518  483106 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 21:37:57.707543  483106 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 21:37:57.707551  483106 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 21:37:57.707565  483106 command_runner.go:130] > # auto_reload_registries = false
	I1202 21:37:57.707577  483106 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 21:37:57.707586  483106 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1202 21:37:57.707593  483106 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 21:37:57.707601  483106 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 21:37:57.707626  483106 command_runner.go:130] > # The mode of short name resolution.
	I1202 21:37:57.707639  483106 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 21:37:57.707646  483106 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 21:37:57.707652  483106 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 21:37:57.707737  483106 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 21:37:57.707776  483106 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support mounting OCI artifacts.
	I1202 21:37:57.707797  483106 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 21:37:57.707804  483106 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 21:37:57.707810  483106 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 21:37:57.707814  483106 command_runner.go:130] > # CNI plugins.
	I1202 21:37:57.707818  483106 command_runner.go:130] > [crio.network]
	I1202 21:37:57.707825  483106 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 21:37:57.707834  483106 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1202 21:37:57.707838  483106 command_runner.go:130] > # cni_default_network = ""
	I1202 21:37:57.707843  483106 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 21:37:57.707880  483106 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 21:37:57.707894  483106 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 21:37:57.707898  483106 command_runner.go:130] > # plugin_dirs = [
	I1202 21:37:57.708100  483106 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 21:37:57.708328  483106 command_runner.go:130] > # ]
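For illustration, a minimal CNI config of the kind CRI-O would discover in network_dir could look like the following; the bridge/host-local values here are hypothetical (on this cluster minikube recommends kindnet instead, as the log below shows), with the subnet matching the pod CIDR used later in this run:

	{
	  "cniVersion": "1.0.0",
	  "name": "minikube-pod-network",
	  "type": "bridge",
	  "bridge": "cni0",
	  "isGateway": true,
	  "ipMasq": true,
	  "ipam": {
	    "type": "host-local",
	    "subnet": "10.244.0.0/16"
	  }
	}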
	I1202 21:37:57.708337  483106 command_runner.go:130] > # List of included pod metrics.
	I1202 21:37:57.708504  483106 command_runner.go:130] > # included_pod_metrics = [
	I1202 21:37:57.708692  483106 command_runner.go:130] > # ]
	I1202 21:37:57.708716  483106 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1202 21:37:57.708721  483106 command_runner.go:130] > [crio.metrics]
	I1202 21:37:57.708725  483106 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 21:37:57.709042  483106 command_runner.go:130] > # enable_metrics = false
	I1202 21:37:57.709050  483106 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 21:37:57.709056  483106 command_runner.go:130] > # By default, all metrics are enabled.
	I1202 21:37:57.709063  483106 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 21:37:57.709070  483106 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 21:37:57.709082  483106 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 21:37:57.709226  483106 command_runner.go:130] > # metrics_collectors = [
	I1202 21:37:57.709424  483106 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 21:37:57.709616  483106 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 21:37:57.709807  483106 command_runner.go:130] > # 	"containers_oom_total",
	I1202 21:37:57.709999  483106 command_runner.go:130] > # 	"processes_defunct",
	I1202 21:37:57.710186  483106 command_runner.go:130] > # 	"operations_total",
	I1202 21:37:57.710377  483106 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 21:37:57.710569  483106 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 21:37:57.710759  483106 command_runner.go:130] > # 	"operations_errors_total",
	I1202 21:37:57.710953  483106 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 21:37:57.711154  483106 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 21:37:57.711347  483106 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 21:37:57.711541  483106 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 21:37:57.711734  483106 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 21:37:57.711929  483106 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 21:37:57.712114  483106 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 21:37:57.712326  483106 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 21:37:57.712521  483106 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 21:37:57.712708  483106 command_runner.go:130] > # ]
	I1202 21:37:57.712718  483106 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 21:37:57.713101  483106 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 21:37:57.713111  483106 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 21:37:57.713462  483106 command_runner.go:130] > # metrics_port = 9090
	I1202 21:37:57.713472  483106 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 21:37:57.713766  483106 command_runner.go:130] > # metrics_socket = ""
	I1202 21:37:57.713798  483106 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 21:37:57.713843  483106 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 21:37:57.713867  483106 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 21:37:57.713890  483106 command_runner.go:130] > # certificate on any modification event.
	I1202 21:37:57.714026  483106 command_runner.go:130] > # metrics_cert = ""
	I1202 21:37:57.714049  483106 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 21:37:57.714055  483106 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 21:37:57.714333  483106 command_runner.go:130] > # metrics_key = ""
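If enable_metrics were switched on, a matching Prometheus scrape job for the default host/port above takes only a few lines (the job name is arbitrary):

	scrape_configs:
	- job_name: "crio"
	  static_configs:
	  - targets: ["127.0.0.1:9090"]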
	I1202 21:37:57.714367  483106 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 21:37:57.714411  483106 command_runner.go:130] > [crio.tracing]
	I1202 21:37:57.714434  483106 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 21:37:57.714690  483106 command_runner.go:130] > # enable_tracing = false
	I1202 21:37:57.714730  483106 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 21:37:57.715040  483106 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 21:37:57.715074  483106 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 21:37:57.715400  483106 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 21:37:57.715424  483106 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 21:37:57.715465  483106 command_runner.go:130] > [crio.nri]
	I1202 21:37:57.715486  483106 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 21:37:57.715706  483106 command_runner.go:130] > # enable_nri = true
	I1202 21:37:57.715731  483106 command_runner.go:130] > # NRI socket to listen on.
	I1202 21:37:57.716042  483106 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 21:37:57.716072  483106 command_runner.go:130] > # NRI plugin directory to use.
	I1202 21:37:57.716381  483106 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 21:37:57.716412  483106 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 21:37:57.716702  483106 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 21:37:57.716734  483106 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 21:37:57.716910  483106 command_runner.go:130] > # nri_disable_connections = false
	I1202 21:37:57.716983  483106 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 21:37:57.717007  483106 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 21:37:57.717025  483106 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 21:37:57.717040  483106 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 21:37:57.717084  483106 command_runner.go:130] > # NRI default validator configuration.
	I1202 21:37:57.717109  483106 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1202 21:37:57.717127  483106 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 21:37:57.717180  483106 command_runner.go:130] > # can be restricted/rejected:
	I1202 21:37:57.717207  483106 command_runner.go:130] > # - OCI hook injection
	I1202 21:37:57.717238  483106 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 21:37:57.717387  483106 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 21:37:57.717408  483106 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 21:37:57.717448  483106 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 21:37:57.717469  483106 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 21:37:57.717489  483106 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 21:37:57.717520  483106 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 21:37:57.717542  483106 command_runner.go:130] > #
	I1202 21:37:57.717559  483106 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 21:37:57.717588  483106 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 21:37:57.717614  483106 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 21:37:57.717634  483106 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 21:37:57.717673  483106 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 21:37:57.717700  483106 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 21:37:57.717721  483106 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 21:37:57.717750  483106 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 21:37:57.717775  483106 command_runner.go:130] > # ]
	I1202 21:37:57.717791  483106 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 21:37:57.717809  483106 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 21:37:57.717844  483106 command_runner.go:130] > [crio.stats]
	I1202 21:37:57.717862  483106 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 21:37:57.717880  483106 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 21:37:57.717896  483106 command_runner.go:130] > # stats_collection_period = 0
	I1202 21:37:57.717933  483106 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 21:37:57.717955  483106 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 21:37:57.717969  483106 command_runner.go:130] > # collection_period = 0
	I1202 21:37:57.719581  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.679996811Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 21:37:57.719602  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680035195Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 21:37:57.719612  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680068245Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 21:37:57.719634  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680094978Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 21:37:57.719650  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680175192Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:37:57.719661  483106 command_runner.go:130] ! time="2025-12-02T21:37:57.680551245Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 21:37:57.719673  483106 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 21:37:57.719793  483106 cni.go:84] Creating CNI manager for ""
	I1202 21:37:57.719806  483106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:37:57.719822  483106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:37:57.719854  483106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:37:57.719977  483106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:37:57.720050  483106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:37:57.727128  483106 command_runner.go:130] > kubeadm
	I1202 21:37:57.727200  483106 command_runner.go:130] > kubectl
	I1202 21:37:57.727217  483106 command_runner.go:130] > kubelet
	I1202 21:37:57.727679  483106 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:37:57.727758  483106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:37:57.735128  483106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:37:57.747401  483106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:37:57.759635  483106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 21:37:57.772168  483106 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:37:57.775704  483106 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:37:57.775781  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:57.892482  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:58.414394  483106 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:37:58.414415  483106 certs.go:195] generating shared ca certs ...
	I1202 21:37:58.414431  483106 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:58.414617  483106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:37:58.414690  483106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:37:58.414702  483106 certs.go:257] generating profile certs ...
	I1202 21:37:58.414822  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:37:58.414884  483106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:37:58.414927  483106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:37:58.414939  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:37:58.414953  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:37:58.414964  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:37:58.414980  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:37:58.414991  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:37:58.415019  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:37:58.415030  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:37:58.415042  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:37:58.415094  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:37:58.415127  483106 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:37:58.415140  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:37:58.415171  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:37:58.415199  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:37:58.415223  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:37:58.415279  483106 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:37:58.415327  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.415344  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem -> /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.415358  483106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.415948  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:37:58.434575  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:37:58.454217  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:37:58.476636  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:37:58.499852  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:37:58.517799  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:37:58.537626  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:37:58.556051  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:37:58.573621  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:37:58.591561  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:37:58.609240  483106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:37:58.626214  483106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:37:58.638898  483106 ssh_runner.go:195] Run: openssl version
	I1202 21:37:58.644941  483106 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:37:58.645379  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:37:58.653758  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657242  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657279  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.657350  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:37:58.697450  483106 command_runner.go:130] > b5213941
	I1202 21:37:58.697880  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:37:58.705830  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:37:58.714550  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718238  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718320  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.718390  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:37:58.760939  483106 command_runner.go:130] > 51391683
	I1202 21:37:58.761409  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:37:58.769112  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:37:58.777300  483106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780878  483106 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780914  483106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.780988  483106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:37:58.821311  483106 command_runner.go:130] > 3ec20f2e
	I1202 21:37:58.821773  483106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:37:58.829482  483106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833099  483106 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:37:58.833249  483106 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:37:58.833277  483106 command_runner.go:130] > Device: 259,1	Inode: 1309045     Links: 1
	I1202 21:37:58.833296  483106 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:37:58.833318  483106 command_runner.go:130] > Access: 2025-12-02 21:33:51.106313964 +0000
	I1202 21:37:58.833335  483106 command_runner.go:130] > Modify: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833354  483106 command_runner.go:130] > Change: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833368  483106 command_runner.go:130] >  Birth: 2025-12-02 21:29:47.431869964 +0000
	I1202 21:37:58.833452  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:37:58.873701  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.874162  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:37:58.914810  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.915281  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:37:58.957479  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.957884  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:37:58.998366  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:58.998755  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:37:59.041919  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.042032  483106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:37:59.082406  483106 command_runner.go:130] > Certificate will not expire
	I1202 21:37:59.082849  483106 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:37:59.082947  483106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:37:59.083063  483106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:37:59.109816  483106 cri.go:89] found id: ""
	I1202 21:37:59.109903  483106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:37:59.116871  483106 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:37:59.116937  483106 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:37:59.116958  483106 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:37:59.117791  483106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:37:59.117835  483106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:37:59.117913  483106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:37:59.125060  483106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:37:59.125506  483106 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-066896" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.125617  483106 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-066896" cluster setting kubeconfig missing "functional-066896" context setting]
	I1202 21:37:59.125900  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.126337  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.126509  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.127095  483106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:37:59.127116  483106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:37:59.127122  483106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:37:59.127127  483106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:37:59.127133  483106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:37:59.127170  483106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:37:59.127484  483106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:37:59.134957  483106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:37:59.134991  483106 kubeadm.go:602] duration metric: took 17.137902ms to restartPrimaryControlPlane
	I1202 21:37:59.135014  483106 kubeadm.go:403] duration metric: took 52.172876ms to StartCluster
	I1202 21:37:59.135029  483106 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135086  483106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.135727  483106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:37:59.135915  483106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 21:37:59.136175  483106 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:37:59.136232  483106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:37:59.136325  483106 addons.go:70] Setting storage-provisioner=true in profile "functional-066896"
	I1202 21:37:59.136339  483106 addons.go:239] Setting addon storage-provisioner=true in "functional-066896"
	I1202 21:37:59.136375  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.136437  483106 addons.go:70] Setting default-storageclass=true in profile "functional-066896"
	I1202 21:37:59.136458  483106 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-066896"
	I1202 21:37:59.136761  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.136798  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.139277  483106 out.go:179] * Verifying Kubernetes components...
	I1202 21:37:59.140771  483106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:37:59.165976  483106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:37:59.168845  483106 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.168870  483106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:37:59.168937  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.175656  483106 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:37:59.176018  483106 kapi.go:59] client config for functional-066896: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:37:59.176385  483106 addons.go:239] Setting addon default-storageclass=true in "functional-066896"
	I1202 21:37:59.176428  483106 host.go:66] Checking if "functional-066896" exists ...
	I1202 21:37:59.176909  483106 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:37:59.211203  483106 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:37:59.211229  483106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:37:59.211311  483106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:37:59.225207  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.248989  483106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:37:59.349954  483106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:37:59.407494  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:37:59.408663  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
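
The "scp memory -->" lines above write the addon manifest bytes straight onto the node, and the two Run: lines then apply them with the node-local kubectl binary over SSH, using the kubeconfig stored inside the node. A rough analogue that shells out to the ssh CLI; host, port, and key are taken from the sshutil lines in the log, and applyAddon is a hypothetical helper, not minikube's API:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func applyAddon(manifest string) error {
		cmd := exec.Command("ssh",
			"-i", "/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa",
			"-p", "33148", "docker@127.0.0.1",
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", "apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
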
	I1202 21:38:00.165713  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165766  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165797  483106 retry.go:31] will retry after 202.822033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165873  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.165889  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.165899  483106 retry.go:31] will retry after 281.773783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
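
Every apply attempt in this stretch fails for the same reason: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver it targets, and the apiserver behind localhost:8441 is still restarting, so the dial is refused. minikube's retry.go then schedules another attempt with a growing delay, and the later attempts in the log switch to `kubectl apply --force`. A minimal sketch of that retry-after pattern; the jitter and the helper name retryAfter are assumptions, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter runs fn until it succeeds or attempts run out, sleeping a
	// jittered, growing delay between tries, like the "will retry after"
	// lines in the log.
	func retryAfter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryAfter(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("connect: connection refused")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
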
	I1202 21:38:00.166009  483106 node_ready.go:35] waiting up to 6m0s for node "functional-066896" to be "Ready" ...
	I1202 21:38:00.166135  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.166200  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.166556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
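
In parallel, node_ready.go starts a 6-minute wait for the node's Ready condition; a Response logged with status="" and milliseconds=0, as here, means the TCP dial itself failed and the loop simply polls again. A client-go sketch of the same wait, assuming the kubeconfig path and node name from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-444114/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms for up to 6 minutes, like the log's cadence.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-066896", metav1.GetOptions{})
				if err != nil {
					// Connection refused while the apiserver restarts: keep polling.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node Ready wait finished:", err)
	}
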
	I1202 21:38:00.368900  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.441989  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.442041  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.442063  483106 retry.go:31] will retry after 393.334545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.448331  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:00.512520  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.512571  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.512592  483106 retry.go:31] will retry after 493.57139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.666814  483106 type.go:168] "Request Body" body=""
	I1202 21:38:00.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:00.667270  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:00.835693  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:00.896509  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:00.896567  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:00.896588  483106 retry.go:31] will retry after 517.359335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.006926  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.069882  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.069952  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.069980  483106 retry.go:31] will retry after 823.867865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.167068  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.167622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.415018  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:01.473591  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.473646  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.473665  483106 retry.go:31] will retry after 817.290744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.666990  483106 type.go:168] "Request Body" body=""
	I1202 21:38:01.667103  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:01.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:01.894929  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:01.964144  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:01.967581  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:01.967615  483106 retry.go:31] will retry after 586.961084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.167465  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:02.167512  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:02.292000  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:02.348780  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.352211  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.352246  483106 retry.go:31] will retry after 1.098539896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.555610  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:02.616881  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:02.616985  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.617011  483106 retry.go:31] will retry after 1.090026315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:02.667191  483106 type.go:168] "Request Body" body=""
	I1202 21:38:02.667272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:02.667575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.166334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.451026  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:03.515404  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.515439  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.515458  483106 retry.go:31] will retry after 2.58724354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:38:03.666944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:03.667328  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:03.707632  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:03.776872  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:03.776924  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:03.776953  483106 retry.go:31] will retry after 972.290717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.166626  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.166706  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.166971  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:04.666777  483106 type.go:168] "Request Body" body=""
	I1202 21:38:04.666867  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:04.667243  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:04.667303  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:04.749460  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:04.810694  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:04.810734  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:04.810752  483106 retry.go:31] will retry after 3.951899284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:05.166161  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.166235  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.166558  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:05.666140  483106 type.go:168] "Request Body" body=""
	I1202 21:38:05.666212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:05.666481  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.102988  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:06.161220  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:06.161263  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.161284  483106 retry.go:31] will retry after 3.838527337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:06.166366  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.166444  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:06.666314  483106 type.go:168] "Request Body" body=""
	I1202 21:38:06.666386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:06.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:07.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.166299  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:07.166671  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:07.666338  483106 type.go:168] "Request Body" body=""
	I1202 21:38:07.666425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:07.666777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.166503  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.166606  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.166933  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:38:08.666295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:08.666603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:08.763053  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:08.821648  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:08.821701  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:08.821721  483106 retry.go:31] will retry after 4.430309202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:09.166538  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.166615  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.166964  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:09.167037  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:09.666806  483106 type.go:168] "Request Body" body=""
	I1202 21:38:09.666904  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:09.667263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.001423  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:10.065960  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:10.069561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.069595  483106 retry.go:31] will retry after 4.835447081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:10.166750  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.166827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.167127  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:10.666978  483106 type.go:168] "Request Body" body=""
	I1202 21:38:10.667076  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:10.667385  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:11.167182  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.167266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.167557  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:11.167608  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:11.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:38:11.666317  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:11.666586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.166242  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:12.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:38:12.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:12.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.167025  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.167092  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.167359  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:13.252779  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:13.311539  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:13.314561  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.314593  483106 retry.go:31] will retry after 7.77807994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:13.667097  483106 type.go:168] "Request Body" body=""
	I1202 21:38:13.667178  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:13.667555  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:13.667614  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:14.166435  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.166532  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.166857  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.666157  483106 type.go:168] "Request Body" body=""
	I1202 21:38:14.666230  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:14.666502  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:14.906038  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:14.963486  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:14.966545  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:14.966583  483106 retry.go:31] will retry after 9.105443561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:15.166926  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.167018  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.167368  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:15.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:38:15.666221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:15.666564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:16.166892  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.166962  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.167321  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:38:16.167385  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:38:16.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:38:16.667311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:16.667666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.166271  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.166345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.166811  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:38:17.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:38:17.666246  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:38:17.666576  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 repeated every ~500ms, 21:38:18.166 through 21:38:20.666, each with an empty request body and a refused connection (status="" milliseconds=0); node_ready.go:55 warned once (21:38:18.666): error getting node "functional-066896" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1202 21:38:21.093408  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:21.149979  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:21.153644  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:21.153677  483106 retry.go:31] will retry after 11.903983297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
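
The "will retry after 11.903983297s" line is minikube's generic retry helper picking a jittered delay before re-running the failed kubectl apply. A rough sketch of that retry shape (hypothetical helper; the jitter-plus-doubling policy is an assumption, not retry.go's exact algorithm):

    // Package retrysketch shows a jittered-backoff retry like the
    // "will retry after ..." lines in this log.
    package retrysketch

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // withBackoff runs fn up to attempts times, sleeping a jittered,
    // roughly doubling delay between failures.
    func withBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Up to +50% jitter so concurrent retries don't synchronize.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }
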
	[... same ~500ms poll loop, 21:38:21.166 through 21:38:23.666, all refused; node_ready.go:55 "will retry" warnings at 21:38:21.167 and 21:38:23.167 ...]
	I1202 21:38:24.072876  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:24.134664  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:24.134721  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:24.134742  483106 retry.go:31] will retry after 11.08333461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
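
Each apply attempt dies in kubectl's client-side validation: validation needs the apiserver's OpenAPI schema, and the fetch of https://localhost:8441/openapi/v2 is refused because the apiserver itself is down. The suggested --validate=false would only skip the schema fetch; the apply would still fail until port 8441 accepts connections. A sketch of the command the ssh_runner lines are executing (paths and flags copied from the log; the Go wrapper is illustrative, not minikube's code):

    // Package applysketch wraps the addon apply command seen in the log.
    package applysketch

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon shells out the way the ssh_runner lines do; sudo accepts the
    // leading KUBECONFIG=... assignment as an environment override.
    func applyAddon(manifest string) ([]byte, error) {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return out, fmt.Errorf("apply %s: %w", manifest, err)
    	}
    	return out, nil
    }
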
	[... poll loop continues, 21:38:24.166 through 21:38:32.666, all refused; node_ready.go:55 warnings at 21:38:25.667, 21:38:28.167, 21:38:30.666 and 21:38:32.666 ...]
	I1202 21:38:33.058732  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:33.133401  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:33.133437  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:33.133456  483106 retry.go:31] will retry after 7.836153133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll loop continues, 21:38:33.166 through 21:38:35.166, all refused; node_ready.go:55 warning at 21:38:34.667 ...]
	I1202 21:38:35.218798  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:35.277107  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:35.277160  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:35.277179  483106 retry.go:31] will retry after 18.212486347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
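
Every failure in this section is the same errno: ECONNREFUSED from dialing 192.168.49.2:8441 (or [::1]:8441). In Go that class of error is detectable with errors.Is, which unwraps the net.OpError chain down to the syscall errno, so a caller could distinguish "apiserver not up yet" from other failures. A minimal illustration (not from the minikube source):

    // Package errsketch classifies connection-refused dial errors.
    package errsketch

    import (
    	"errors"
    	"syscall"
    )

    // serverNotUp reports whether err is (or wraps) ECONNREFUSED,
    // i.e. nothing is listening on the target port yet.
    func serverNotUp(err error) bool {
    	return errors.Is(err, syscall.ECONNREFUSED)
    }
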
	[... poll loop continues, 21:38:35.666 through 21:38:40.666, all refused; node_ready.go:55 warnings at 21:38:37.167 and 21:38:39.667 ...]
	I1202 21:38:40.969813  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:38:41.027522  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:41.030695  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:41.030727  483106 retry.go:31] will retry after 26.445141412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll loop continues, 21:38:41.167 through 21:38:53.166, all refused; node_ready.go:55 warnings roughly every two seconds (21:38:41.667, 21:38:43.667, 21:38:46.166, 21:38:48.166, 21:38:50.167, 21:38:52.666) ...]
	I1202 21:38:53.490393  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:38:53.549126  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:38:53.552379  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:38:53.552413  483106 retry.go:31] will retry after 28.270272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll loop continues, 21:38:53.666 through 21:39:07.167, all refused; node_ready.go:55 warnings at 21:38:54.667, 21:38:57.166, 21:38:59.166, 21:39:01.167, 21:39:03.167 and 21:39:05.666 ...]
	I1202 21:39:07.476950  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:07.537734  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:07.540988  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.541021  483106 retry.go:31] will retry after 43.142584555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:39:07.666246  483106 type.go:168] "Request Body" body=""
	I1202 21:39:07.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:07.666665  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:07.666721  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET poll repeated every ~500ms through 21:39:21.667, all attempts refused; node_ready.go:55 "will retry" warnings recurred at 21:39:10, 21:39:12, 21:39:14, 21:39:16, 21:39:18 and 21:39:20 ...]
	I1202 21:39:21.822959  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:39:21.878670  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878722  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:21.878822  483106 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:22.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:39:22.167188  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:22.167486  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET poll repeated every ~500ms through 21:39:50.667, all attempts refused; node_ready.go:55 "will retry" warnings recurred at 21:39:23, 21:39:25, 21:39:27, 21:39:30, 21:39:32, 21:39:34, 21:39:36, 21:39:38, 21:39:40, 21:39:42, 21:39:44, 21:39:46 and 21:39:49 ...]
	I1202 21:39:50.684445  483106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:39:50.752913  483106 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.752959  483106 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:39:50.753053  483106 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:39:50.754872  483106 out.go:179] * Enabled addons: 
	I1202 21:39:50.756298  483106 addons.go:530] duration metric: took 1m51.620061888s for enable addons: enabled=[]
	I1202 21:39:51.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:39:51.166426  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:51.166756  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:39:51.666948  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET poll repeated every ~500ms through 21:39:59.166, all attempts refused; further node_ready.go:55 "will retry" warnings at 21:39:53, 21:39:56 and 21:39:58 ...]
	I1202 21:39:59.667069  483106 type.go:168] "Request Body" body=""
	I1202 21:39:59.667137  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:39:59.667455  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:00.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:40:00.166590  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:00.166967  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:00.666805  483106 type.go:168] "Request Body" body=""
	I1202 21:40:00.666883  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:00.667412  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:00.667479  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:01.166600  483106 type.go:168] "Request Body" body=""
	I1202 21:40:01.166671  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:01.167071  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:01.666865  483106 type.go:168] "Request Body" body=""
	I1202 21:40:01.666943  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:01.667324  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:02.167126  483106 type.go:168] "Request Body" body=""
	I1202 21:40:02.167206  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:02.167585  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:02.666196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:02.666266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:02.666525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:03.166226  483106 type.go:168] "Request Body" body=""
	I1202 21:40:03.166298  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:03.166603  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:03.166657  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:03.666239  483106 type.go:168] "Request Body" body=""
	I1202 21:40:03.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:03.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:04.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:40:04.166563  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:04.166827  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:04.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:40:04.666370  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:04.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:05.166425  483106 type.go:168] "Request Body" body=""
	I1202 21:40:05.166503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:05.166802  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:05.166854  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:05.666560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:05.666632  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:05.666917  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:06.166784  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.166862  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.167188  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:06.666980  483106 type.go:168] "Request Body" body=""
	I1202 21:40:06.667073  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:06.667410  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:07.167168  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.167242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.167577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:07.167637  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:07.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:07.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:07.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.166347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:08.666848  483106 type.go:168] "Request Body" body=""
	I1202 21:40:08.666917  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:08.667201  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.167118  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.167192  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.167533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:09.666272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:09.666346  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:09.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:09.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:10.166218  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.166297  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.166630  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:10.666244  483106 type.go:168] "Request Body" body=""
	I1202 21:40:10.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:10.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.166230  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.166652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:11.666139  483106 type.go:168] "Request Body" body=""
	I1202 21:40:11.666209  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:11.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:12.166254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.166674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:12.166731  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:12.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:40:12.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:12.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.166378  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.166445  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.166702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:13.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:13.666337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:13.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:14.166684  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.166770  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.167156  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:14.167223  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:14.666896  483106 type.go:168] "Request Body" body=""
	I1202 21:40:14.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:14.667255  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.167098  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.167171  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.167589  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:15.666317  483106 type.go:168] "Request Body" body=""
	I1202 21:40:15.666392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:15.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:16.166898  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.166964  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.167280  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:16.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:16.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:40:16.667212  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:16.667594  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.166183  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.166578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:17.666227  483106 type.go:168] "Request Body" body=""
	I1202 21:40:17.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:17.666643  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.166363  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.166741  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:18.666473  483106 type.go:168] "Request Body" body=""
	I1202 21:40:18.666544  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:18.666888  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:18.666946  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:19.166811  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.166894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.167197  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:19.667052  483106 type.go:168] "Request Body" body=""
	I1202 21:40:19.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:19.667494  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.166251  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.166656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:20.666206  483106 type.go:168] "Request Body" body=""
	I1202 21:40:20.666278  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:20.666536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:21.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:21.166718  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:21.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:21.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:21.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.166154  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.166236  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.166525  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:22.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:40:22.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:22.666654  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:23.166272  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.166350  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.166696  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:23.166758  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:23.667064  483106 type.go:168] "Request Body" body=""
	I1202 21:40:23.667131  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:23.667404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.166423  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.166514  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.166938  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:24.666515  483106 type.go:168] "Request Body" body=""
	I1202 21:40:24.666591  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:24.666926  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.166167  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.166574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:25.666268  483106 type.go:168] "Request Body" body=""
	I1202 21:40:25.666343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:25.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:25.666738  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:26.166284  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.166386  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.166758  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:26.667125  483106 type.go:168] "Request Body" body=""
	I1202 21:40:26.667194  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:26.667482  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.166187  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.166261  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.166601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:27.666179  483106 type.go:168] "Request Body" body=""
	I1202 21:40:27.666248  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:27.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:28.166873  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.166943  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.167276  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:28.167335  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:28.667149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:28.667219  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:28.667624  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:29.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:40:29.166678  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:29.167031  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:29.666202  483106 type.go:168] "Request Body" body=""
	I1202 21:40:29.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:29.666578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:30.166296  483106 type.go:168] "Request Body" body=""
	I1202 21:40:30.166374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:30.166722  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:30.666438  483106 type.go:168] "Request Body" body=""
	I1202 21:40:30.666516  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:30.666818  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:30.666863  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:31.167130  483106 type.go:168] "Request Body" body=""
	I1202 21:40:31.167203  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:31.167472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:31.666847  483106 type.go:168] "Request Body" body=""
	I1202 21:40:31.666919  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:31.667279  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:32.167093  483106 type.go:168] "Request Body" body=""
	I1202 21:40:32.167163  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:32.167483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:32.666708  483106 type.go:168] "Request Body" body=""
	I1202 21:40:32.666786  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:32.667188  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:32.667239  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:33.166964  483106 type.go:168] "Request Body" body=""
	I1202 21:40:33.167053  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:33.167388  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:33.666150  483106 type.go:168] "Request Body" body=""
	I1202 21:40:33.666225  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:33.666552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:34.166210  483106 type.go:168] "Request Body" body=""
	I1202 21:40:34.166281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:34.166580  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:34.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:40:34.666327  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:34.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:35.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:40:35.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:35.166672  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:35.166733  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:35.667033  483106 type.go:168] "Request Body" body=""
	I1202 21:40:35.667102  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:35.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:36.167161  483106 type.go:168] "Request Body" body=""
	I1202 21:40:36.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:36.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:36.666260  483106 type.go:168] "Request Body" body=""
	I1202 21:40:36.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:36.666682  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:37.166210  483106 type.go:168] "Request Body" body=""
	I1202 21:40:37.166281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:37.166552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:37.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:40:37.666343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:37.666698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:37.666757  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:38.166422  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.166500  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.166829  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:38.666194  483106 type.go:168] "Request Body" body=""
	I1202 21:40:38.666265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:38.666533  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.167095  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:39.666900  483106 type.go:168] "Request Body" body=""
	I1202 21:40:39.666974  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:39.667318  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:39.667375  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:40.167120  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.167190  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.167543  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:40.666231  483106 type.go:168] "Request Body" body=""
	I1202 21:40:40.666308  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:40.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.166347  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.166425  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.166750  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:41.666197  483106 type.go:168] "Request Body" body=""
	I1202 21:40:41.666274  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:41.666605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:42.166537  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.166619  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:42.167094  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:42.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:40:42.666923  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:42.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.167057  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.167134  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.167398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:43.667173  483106 type.go:168] "Request Body" body=""
	I1202 21:40:43.667250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:43.667599  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.166501  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.166575  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.166892  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:44.666149  483106 type.go:168] "Request Body" body=""
	I1202 21:40:44.666222  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:44.666488  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:44.666529  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:45.166301  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.166394  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.166815  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:45.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:45.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:45.666688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.166383  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.166726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:46.666288  483106 type.go:168] "Request Body" body=""
	I1202 21:40:46.666390  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:46.666823  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:46.666883  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:47.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:47.666906  483106 type.go:168] "Request Body" body=""
	I1202 21:40:47.666980  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:47.667259  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.167086  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.167160  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.167539  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:48.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:49.166560  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.166634  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.166898  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:49.166951  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:49.666759  483106 type.go:168] "Request Body" body=""
	I1202 21:40:49.666827  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:49.667195  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.167097  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.167180  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.167561  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:50.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:40:50.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:50.666606  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.166662  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:51.666376  483106 type.go:168] "Request Body" body=""
	I1202 21:40:51.666454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:51.666782  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:51.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:52.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.166277  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:52.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:40:52.666260  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:52.666596  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.166242  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.166586  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:53.666280  483106 type.go:168] "Request Body" body=""
	I1202 21:40:53.666347  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:53.666611  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:54.166666  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.166740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.167107  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:54.167169  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:54.666965  483106 type.go:168] "Request Body" body=""
	I1202 21:40:54.667066  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:54.667453  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.166768  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.166843  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.167212  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:55.667075  483106 type.go:168] "Request Body" body=""
	I1202 21:40:55.667147  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:55.667476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.166196  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.166283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:56.666907  483106 type.go:168] "Request Body" body=""
	I1202 21:40:56.666978  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:56.667341  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:56.667400  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:57.167105  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.167182  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.167548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:57.666151  483106 type.go:168] "Request Body" body=""
	I1202 21:40:57.666224  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:57.666574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.166270  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.166340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.166605  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:58.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:40:58.666339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:58.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:40:59.166616  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.166687  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.167061  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:40:59.167133  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:40:59.666436  483106 type.go:168] "Request Body" body=""
	I1202 21:40:59.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:40:59.666763  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.166322  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.166433  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.166775  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:00.666772  483106 type.go:168] "Request Body" body=""
	I1202 21:41:00.666864  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:00.667256  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.166511  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.166588  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.166874  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:01.666242  483106 type.go:168] "Request Body" body=""
	I1202 21:41:01.666312  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:01.666652  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:01.666713  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:02.166240  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.166324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.166701  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:02.666821  483106 type.go:168] "Request Body" body=""
	I1202 21:41:02.666894  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:02.667219  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.167019  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.167098  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.167404  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:03.667108  483106 type.go:168] "Request Body" body=""
	I1202 21:41:03.667179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:03.667509  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:03.667571  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:04.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.166539  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:04.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:41:04.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:04.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:05.666387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:05.666456  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:05.666711  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:06.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.166337  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:06.166736  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:06.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:41:06.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:06.666668  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.166352  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.166429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:07.666257  483106 type.go:168] "Request Body" body=""
	I1202 21:41:07.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:07.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.166256  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.166330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.166638  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:08.666213  483106 type.go:168] "Request Body" body=""
	I1202 21:41:08.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:08.666674  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:08.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:09.166897  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.166972  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.167350  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:09.667159  483106 type.go:168] "Request Body" body=""
	I1202 21:41:09.667231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:09.667559  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.166198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.166610  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:10.666259  483106 type.go:168] "Request Body" body=""
	I1202 21:41:10.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:10.666683  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:11.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.166812  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:11.166864  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:11.667095  483106 type.go:168] "Request Body" body=""
	I1202 21:41:11.667159  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:11.667414  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.167205  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.167279  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.167635  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:12.666270  483106 type.go:168] "Request Body" body=""
	I1202 21:41:12.666361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:12.666734  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.166244  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.166554  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:13.666237  483106 type.go:168] "Request Body" body=""
	I1202 21:41:13.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:13.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:13.666743  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:14.166756  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.166839  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.167224  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:14.666384  483106 type.go:168] "Request Body" body=""
	I1202 21:41:14.666452  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:14.666765  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.166506  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.166604  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.167025  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:15.666880  483106 type.go:168] "Request Body" body=""
	I1202 21:41:15.666953  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:15.667301  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:15.667360  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:16.167103  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.167186  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.167467  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:16.666185  483106 type.go:168] "Request Body" body=""
	I1202 21:41:16.666259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:16.666581  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.166317  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.166400  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.166698  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:17.666368  483106 type.go:168] "Request Body" body=""
	I1202 21:41:17.666435  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:17.666759  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:18.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.166336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.166659  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:18.166712  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:18.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:18.666316  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:18.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.166731  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.166992  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:19.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:19.666925  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:19.667275  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:20.167102  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.167179  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.167552  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:20.167610  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:41:20.666272  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:20.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.166282  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.166361  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.166713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:21.666428  483106 type.go:168] "Request Body" body=""
	I1202 21:41:21.666503  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:21.666878  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.166118  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.166189  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.166472  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:22.666186  483106 type.go:168] "Request Body" body=""
	I1202 21:41:22.666263  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:22.666583  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:22.666636  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:23.166387  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.166458  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.166817  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:23.666524  483106 type.go:168] "Request Body" body=""
	I1202 21:41:23.666616  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:23.666974  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.166861  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.166944  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.167295  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:24.667130  483106 type.go:168] "Request Body" body=""
	I1202 21:41:24.667205  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:24.667569  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:24.667625  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:25.166285  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.166367  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.166640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:25.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:41:25.666324  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:25.666656  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.166431  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.166504  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.166839  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:26.666198  483106 type.go:168] "Request Body" body=""
	I1202 21:41:26.666268  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:26.666590  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:27.166283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.166352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.166688  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:27.166741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:27.666269  483106 type.go:168] "Request Body" body=""
	I1202 21:41:27.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:27.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.166370  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.166448  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.166720  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:28.666254  483106 type.go:168] "Request Body" body=""
	I1202 21:41:28.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:28.666614  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:29.166581  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.166657  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.166988  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:29.167064  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:29.666310  483106 type.go:168] "Request Body" body=""
	I1202 21:41:29.666379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:29.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.166344  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.166689  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:30.666407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:30.666494  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:30.666837  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.166203  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.166280  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.166591  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:31.666262  483106 type.go:168] "Request Body" body=""
	I1202 21:41:31.666335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:31.666700  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:31.666773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:32.166263  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.166335  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.166666  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:32.666931  483106 type.go:168] "Request Body" body=""
	I1202 21:41:32.667021  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:32.667367  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.167169  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.167238  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.167574  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:33.666283  483106 type.go:168] "Request Body" body=""
	I1202 21:41:33.666354  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:33.666664  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:34.166448  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.166521  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.166778  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:34.166817  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:34.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:41:34.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:34.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.166425  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.166518  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.166928  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:35.666141  483106 type.go:168] "Request Body" body=""
	I1202 21:41:35.666213  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:35.666489  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.166173  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.166250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.166587  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:36.666281  483106 type.go:168] "Request Body" body=""
	I1202 21:41:36.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:36.666706  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:36.666759  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:37.166409  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.166478  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.166748  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:37.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:41:37.666371  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:37.666690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.166380  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.166453  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.166751  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:38.666156  483106 type.go:168] "Request Body" body=""
	I1202 21:41:38.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:38.666498  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:39.166531  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.166607  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.166922  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:39.166975  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:39.666290  483106 type.go:168] "Request Body" body=""
	I1202 21:41:39.666360  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:39.666641  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.166314  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.166383  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.166661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:40.666295  483106 type.go:168] "Request Body" body=""
	I1202 21:41:40.666370  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:40.666709  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.166407  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.166482  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.166800  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:41.666481  483106 type.go:168] "Request Body" body=""
	I1202 21:41:41.666552  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:41.666826  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:41.666867  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:41:42.166504  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.166597  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.167020  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:42.666855  483106 type.go:168] "Request Body" body=""
	I1202 21:41:42.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:42.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.166575  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.166655  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.166923  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:43.666265  483106 type.go:168] "Request Body" body=""
	I1202 21:41:43.666349  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:43.666713  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:41:44.166680  483106 type.go:168] "Request Body" body=""
	I1202 21:41:44.166751  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:41:44.167102  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:41:44.167158  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 near-identical poll cycles omitted: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-066896 request (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format) was issued every ~500ms from 21:41:44.666 through 21:42:45.166, each returning an empty response (status="" headers="" milliseconds=0), with the node_ready.go:55 warning — error getting node "functional-066896" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused — recurring roughly every 2-2.5s ...]
	I1202 21:42:45.666819  483106 type.go:168] "Request Body" body=""
	I1202 21:42:45.666897  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:45.667261  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.166500  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.166583  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.166847  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:46.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:46.666315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:46.666679  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:47.166414  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.166497  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:47.166838  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:47.666485  483106 type.go:168] "Request Body" body=""
	I1202 21:42:47.666557  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:47.666832  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.166264  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.166343  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.166678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:48.666261  483106 type.go:168] "Request Body" body=""
	I1202 21:42:48.666336  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:48.666684  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:49.166554  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.166635  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.166960  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:49.167054  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:49.666877  483106 type.go:168] "Request Body" body=""
	I1202 21:42:49.666951  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:49.667292  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.167131  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.167207  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.167578  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:50.666932  483106 type.go:168] "Request Body" body=""
	I1202 21:42:50.667019  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:50.667326  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:51.167186  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.167276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.167691  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:51.167754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:51.666431  483106 type.go:168] "Request Body" body=""
	I1202 21:42:51.666506  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:51.666825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.166160  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.166241  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.166511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:52.666241  483106 type.go:168] "Request Body" body=""
	I1202 21:42:52.666313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:52.666661  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.166381  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.166466  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.166825  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:53.667113  483106 type.go:168] "Request Body" body=""
	I1202 21:42:53.667187  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:53.667483  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:53.667539  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:54.166519  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.166598  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.166946  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:54.666794  483106 type.go:168] "Request Body" body=""
	I1202 21:42:54.666869  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:54.667190  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.166481  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.166549  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.166809  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:55.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:42:55.666325  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:55.666671  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:56.166359  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.166437  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.166777  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:56.166834  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:56.666178  483106 type.go:168] "Request Body" body=""
	I1202 21:42:56.666250  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.166224  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.166303  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.166628  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:57.666245  483106 type.go:168] "Request Body" body=""
	I1202 21:42:57.666323  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:57.666640  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.166169  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.166239  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.166503  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:58.666190  483106 type.go:168] "Request Body" body=""
	I1202 21:42:58.666269  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:58.666602  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:42:58.666661  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:42:59.166757  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.166838  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.167155  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:42:59.666449  483106 type.go:168] "Request Body" body=""
	I1202 21:42:59.666515  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:42:59.666860  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.166309  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.166395  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.166721  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:00.666575  483106 type.go:168] "Request Body" body=""
	I1202 21:43:00.666682  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:00.667068  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:00.667126  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:01.166853  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.167038  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.167371  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:01.667186  483106 type.go:168] "Request Body" body=""
	I1202 21:43:01.667265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:01.667601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.166238  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.166322  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:02.666979  483106 type.go:168] "Request Body" body=""
	I1202 21:43:02.667074  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:02.667353  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:02.667401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:03.167145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.167221  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.167567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:03.666255  483106 type.go:168] "Request Body" body=""
	I1202 21:43:03.666326  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:03.666639  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.166598  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.166767  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:04.667023  483106 type.go:168] "Request Body" body=""
	I1202 21:43:04.667100  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:04.667434  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:04.667488  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:05.166177  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.166259  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.166604  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:05.666866  483106 type.go:168] "Request Body" body=""
	I1202 21:43:05.666932  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:05.667249  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.167087  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.167170  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.167507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:06.666273  483106 type.go:168] "Request Body" body=""
	I1202 21:43:06.666345  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:06.666702  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:07.166389  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.166454  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.166729  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:07.166773  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:07.666440  483106 type.go:168] "Request Body" body=""
	I1202 21:43:07.666529  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:07.666861  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.166628  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.166712  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.167093  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:08.666822  483106 type.go:168] "Request Body" body=""
	I1202 21:43:08.666890  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:08.667183  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:09.167074  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.167152  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.167512  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:09.167567  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:09.666271  483106 type.go:168] "Request Body" body=""
	I1202 21:43:09.666352  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:09.666710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.166961  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.167396  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:10.666160  483106 type.go:168] "Request Body" body=""
	I1202 21:43:10.666231  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:10.666547  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.166341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.166637  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:11.666393  483106 type.go:168] "Request Body" body=""
	I1202 21:43:11.666463  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:11.666766  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:11.666808  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:12.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.166331  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.166645  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:12.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:12.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:12.666717  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.166302  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.166379  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.166710  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:13.666287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:13.666374  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:13.666735  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:14.166633  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.166711  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.167091  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:14.167149  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:14.666871  483106 type.go:168] "Request Body" body=""
	I1202 21:43:14.666946  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:14.667269  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.167061  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.167138  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.167476  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:15.666203  483106 type.go:168] "Request Body" body=""
	I1202 21:43:15.666281  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:15.666622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.166164  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.166245  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.166507  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:16.666216  483106 type.go:168] "Request Body" body=""
	I1202 21:43:16.666286  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:16.666655  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:16.666726  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:17.166201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.166273  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.166577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:17.666191  483106 type.go:168] "Request Body" body=""
	I1202 21:43:17.666256  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:17.666511  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.166212  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.166315  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.166633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:18.666248  483106 type.go:168] "Request Body" body=""
	I1202 21:43:18.666318  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:18.666601  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:19.166505  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.166576  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.166870  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:19.166918  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:19.666249  483106 type.go:168] "Request Body" body=""
	I1202 21:43:19.666342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:19.666678  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.166333  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:20.666204  483106 type.go:168] "Request Body" body=""
	I1202 21:43:20.666276  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:20.666567  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.166357  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.166648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:21.666369  483106 type.go:168] "Request Body" body=""
	I1202 21:43:21.666443  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:21.666785  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:21.666840  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:22.166492  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.166561  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.166824  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:22.666293  483106 type.go:168] "Request Body" body=""
	I1202 21:43:22.666368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:22.666708  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.166281  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.166368  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.166699  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:23.666210  483106 type.go:168] "Request Body" body=""
	I1202 21:43:23.666283  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:23.666537  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:24.166569  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.166660  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.167035  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:24.167111  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:24.666850  483106 type.go:168] "Request Body" body=""
	I1202 21:43:24.666926  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:24.667230  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.166928  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.167024  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.167370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:25.667147  483106 type.go:168] "Request Body" body=""
	I1202 21:43:25.667223  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:25.667622  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.166220  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.166295  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.166620  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:26.666170  483106 type.go:168] "Request Body" body=""
	I1202 21:43:26.666243  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:26.666504  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:26.666554  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:27.166253  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.166332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.166660  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:27.666251  483106 type.go:168] "Request Body" body=""
	I1202 21:43:27.666330  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:27.666657  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.166197  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.166266  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.166524  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:28.666247  483106 type.go:168] "Request Body" body=""
	I1202 21:43:28.666332  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:28.666680  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:28.666735  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:29.166765  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.166840  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.167165  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:29.666897  483106 type.go:168] "Request Body" body=""
	I1202 21:43:29.666981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:29.667306  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.167174  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.167271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.167625  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:30.666334  483106 type.go:168] "Request Body" body=""
	I1202 21:43:30.666419  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:30.666807  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:30.666870  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:31.167152  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.167220  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.167536  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:31.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:31.666338  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:31.666704  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.166268  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.166351  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.166675  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:32.666217  483106 type.go:168] "Request Body" body=""
	I1202 21:43:32.666287  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:32.666548  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:33.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.166313  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.166650  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:33.166706  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:33.666243  483106 type.go:168] "Request Body" body=""
	I1202 21:43:33.666320  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:33.666648  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.166464  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.166799  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:34.666282  483106 type.go:168] "Request Body" body=""
	I1202 21:43:34.666375  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:34.666726  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.166319  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.166392  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.166686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:35.666145  483106 type.go:168] "Request Body" body=""
	I1202 21:43:35.666218  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:35.666514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:35.666568  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:36.166250  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.166319  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.166626  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:36.666324  483106 type.go:168] "Request Body" body=""
	I1202 21:43:36.666401  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:36.666725  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.166908  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.166975  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.167263  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:37.667047  483106 type.go:168] "Request Body" body=""
	I1202 21:43:37.667118  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:37.667398  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:37.667447  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:38.166151  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.166226  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.166528  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:38.666232  483106 type.go:168] "Request Body" body=""
	I1202 21:43:38.666314  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:38.666633  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.166658  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.166754  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.167075  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:39.666637  483106 type.go:168] "Request Body" body=""
	I1202 21:43:39.666714  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:39.667049  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:40.166341  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.166420  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.166681  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:40.166728  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:40.666385  483106 type.go:168] "Request Body" body=""
	I1202 21:43:40.666455  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:40.666787  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.166265  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.166670  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:41.666356  483106 type.go:168] "Request Body" body=""
	I1202 21:43:41.666429  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:41.666697  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:42.166327  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.166411  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.166822  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:42.166896  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:42.666589  483106 type.go:168] "Request Body" body=""
	I1202 21:43:42.666665  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:42.667015  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.166747  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.166812  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.167088  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:43.666863  483106 type.go:168] "Request Body" body=""
	I1202 21:43:43.666934  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:43.667289  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:44.166907  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.166981  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.167339  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:44.167397  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:44.666667  483106 type.go:168] "Request Body" body=""
	I1202 21:43:44.666740  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:44.667046  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.166921  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.167029  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.167441  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:45.666175  483106 type.go:168] "Request Body" body=""
	I1202 21:43:45.666253  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:45.666621  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.166176  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.166254  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.166514  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:46.666267  483106 type.go:168] "Request Body" body=""
	I1202 21:43:46.666362  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:46.666695  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:46.666754  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:47.166451  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.166530  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.166864  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:47.667182  483106 type.go:168] "Request Body" body=""
	I1202 21:43:47.667255  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:47.667579  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.166269  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.166342  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.166676  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:48.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:48.666341  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:48.666673  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:49.166748  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.166817  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.167193  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:49.167250  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:49.666922  483106 type.go:168] "Request Body" body=""
	I1202 21:43:49.667010  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:49.667370  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.166155  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.166229  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.166575  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:50.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:50.666900  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:50.667180  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:51.166962  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.167055  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.167345  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:51.167391  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:51.667137  483106 type.go:168] "Request Body" body=""
	I1202 21:43:51.667233  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:51.667577  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.166264  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.166564  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:52.666171  483106 type.go:168] "Request Body" body=""
	I1202 21:43:52.666249  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:52.666566  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.166287  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.166366  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.166690  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:53.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:53.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:53.666529  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:53.666576  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:54.166567  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.166645  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.167026  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:54.666829  483106 type.go:168] "Request Body" body=""
	I1202 21:43:54.666911  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:54.667510  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.166195  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.166265  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.166542  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:55.666263  483106 type.go:168] "Request Body" body=""
	I1202 21:43:55.666334  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:55.666651  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:55.666707  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:56.166239  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.166311  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.166642  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:56.666208  483106 type.go:168] "Request Body" body=""
	I1202 21:43:56.666282  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:56.666556  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.167073  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.167151  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.167546  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:57.666266  483106 type.go:168] "Request Body" body=""
	I1202 21:43:57.666340  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:57.666686  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:43:57.666741  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-066896": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:43:58.166390  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.166469  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.166793  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:58.666256  483106 type.go:168] "Request Body" body=""
	I1202 21:43:58.666328  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:58.666632  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.166260  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.166339  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.166647  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:43:59.666201  483106 type.go:168] "Request Body" body=""
	I1202 21:43:59.666271  483106 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-066896" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:43:59.666634  483106 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:44:00.169272  483106 type.go:168] "Request Body" body=""
	W1202 21:44:00.169401  483106 node_ready.go:55] error getting node "functional-066896" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 21:44:00.169464  483106 node_ready.go:38] duration metric: took 6m0.003439328s for node "functional-066896" to be "Ready" ...
	I1202 21:44:00.175124  483106 out.go:203] 
	W1202 21:44:00.178380  483106 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:44:00.178413  483106 out.go:285] * 
	W1202 21:44:00.180645  483106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:44:00.185151  483106 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.116158755Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=d6fd777b-1bb1-431e-9591-d4dc00e55d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.14108786Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.14124499Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:09 functional-066896 crio[6009]: time="2025-12-02T21:44:09.141297528Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2d7899ab-0792-488f-996b-e0a6c3e572ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.221592581Z" level=info msg="Checking image status: minikube-local-cache-test:functional-066896" id=af739e94-7318-459c-9400-e955cd157d81 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244103008Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-066896" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244243505Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-066896 not found" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.244284301Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-066896 found" id=0cc2ed60-4726-4935-8c0e-4dc57d5842b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.268139675Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-066896" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.269264377Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-066896 not found" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:10 functional-066896 crio[6009]: time="2025-12-02T21:44:10.269313346Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-066896 found" id=65775498-df63-4d94-ba7a-92f31f974251 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.0837311Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3ab13c0-493e-44ab-baec-d0bff455f6aa name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.449999322Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.450135126Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:11 functional-066896 crio[6009]: time="2025-12-02T21:44:11.450170647Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7b013f7c-b914-41bd-ae4c-b6cde7cba10e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.02245815Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.022600412Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.022639624Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=165e6e8a-1493-488b-ae7c-7f0b491f4718 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.046927288Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.047195328Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.047237231Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5d358ad2-dbf8-483c-ba3f-3c2d28c998b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.072792262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.073142829Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.073226111Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=06b87679-5259-4610-91e5-18f8083af0a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:44:12 functional-066896 crio[6009]: time="2025-12-02T21:44:12.634324701Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ff51799f-eb0a-4ede-80e4-d668c6b158e4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:44:16.597520   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:16.598062   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:16.599567   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:16.600131   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:44:16.601554   10109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:44:16 up  3:26,  0 user,  load average: 0.62, 0.31, 0.52
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:44:14 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:14 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 02 21:44:14 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:14 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:14 functional-066896 kubelet[9985]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:14 functional-066896 kubelet[9985]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:14 functional-066896 kubelet[9985]: E1202 21:44:14.999217    9985 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:15 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:15 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:15 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 02 21:44:15 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:15 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:15 functional-066896 kubelet[10019]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:15 functional-066896 kubelet[10019]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:15 functional-066896 kubelet[10019]: E1202 21:44:15.732389   10019 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:15 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:15 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:44:16 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 02 21:44:16 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:16 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:44:16 functional-066896 kubelet[10080]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:16 functional-066896 kubelet[10080]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:44:16 functional-066896 kubelet[10080]: E1202 21:44:16.478069   10080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:44:16 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:44:16 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
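The "==> kubelet <==" section above shows the real failure: kubelet v1.35.0-beta.0 exits at startup because the host still runs cgroup v1 and the FailCgroupV1 validation refuses to proceed, so the apiserver on 192.168.49.2:8441 never comes up and every node-ready poll in the log ends in "connection refused". A quick way to confirm the host's cgroup mode by hand (illustrative commands, not part of the recorded run):

	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# inspect the kubelet crash loop directly, as the captured output itself suggests
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 20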
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (334.614932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 21:46:18.470465  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:48:42.593707  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:49:21.543213  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:50:05.667149  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:51:18.471190  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:53:42.597762  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:56:18.469668  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.375084923s)

                                                
                                                
-- stdout --
	* [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230264s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.376126262s for "functional-066896" cluster.
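The failure above is kubeadm's generic wait-control-plane timeout: the kubelet never answered on http://127.0.0.1:10248/healthz, so the control-plane static pods never came up. A minimal triage sketch, following the two commands kubeadm prints and the fix minikube's own hint proposes (profile name and binary path are taken from this run; whether the cgroup-driver override actually helps depends on the host):

	# Inspect the kubelet from inside the node before retrying:
	out/minikube-linux-arm64 -p functional-066896 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p functional-066896 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# Retry the start with the cgroup driver suggested in the output above:
	out/minikube-linux-arm64 start -p functional-066896 \
	  --extra-config=kubelet.cgroup-driver=systemd --wait=all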
I1202 21:56:31.980839  447211 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
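The NetworkSettings.Ports block in the inspect output above is what the harness later reads back one field at a time; the same Go template that appears in the "Last Start" log below can be run by hand to pull a single mapping. For example, extracting the SSH host port for this profile (the expected value 33148 comes from the Ports block above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-066896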
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (320.110191ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
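Exit status 2 here is consistent with the state of this cluster: the host container is Running while kubelet and apiserver are not, and minikube status encodes component state in its exit code as well as in the printed text (hence the harness's "may be ok"). A sketch for seeing all three components at once, assuming the same profile:

	out/minikube-linux-arm64 status -p functional-066896 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'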
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image   │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete  │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start   │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start   │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:latest                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add minikube-local-cache-test:functional-066896                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache delete minikube-local-cache-test:functional-066896                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl images                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ cache   │ functional-066896 cache reload                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ kubectl │ functional-066896 kubectl -- --context functional-066896 get pods                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ start   │ -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:44:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:44:17.650988  488914 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:44:17.651127  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651131  488914 out.go:374] Setting ErrFile to fd 2...
	I1202 21:44:17.651134  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651388  488914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:44:17.651725  488914 out.go:368] Setting JSON to false
	I1202 21:44:17.652562  488914 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12386,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:44:17.652624  488914 start.go:143] virtualization:  
	I1202 21:44:17.655925  488914 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:44:17.658824  488914 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:44:17.658955  488914 notify.go:221] Checking for updates...
	I1202 21:44:17.664772  488914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:44:17.667672  488914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:44:17.670581  488914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:44:17.673492  488914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:44:17.676281  488914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:44:17.679520  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:17.679615  488914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:44:17.708368  488914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:44:17.708467  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.767956  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.759221256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.768046  488914 docker.go:319] overlay module found
	I1202 21:44:17.771104  488914 out.go:179] * Using the docker driver based on existing profile
	I1202 21:44:17.773889  488914 start.go:309] selected driver: docker
	I1202 21:44:17.773897  488914 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.773983  488914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:44:17.774077  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.834934  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.825868601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.835402  488914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:44:17.835426  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:17.835482  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:17.835523  488914 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.838587  488914 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:44:17.841458  488914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:44:17.844370  488914 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:44:17.847200  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:17.847277  488914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:44:17.866587  488914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:44:17.866598  488914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:44:17.909149  488914 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:44:18.073530  488914 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:44:18.073687  488914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:44:18.073803  488914 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073909  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:44:18.073917  488914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.617µs
	I1202 21:44:18.073927  488914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:44:18.073937  488914 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:44:18.073939  488914 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073964  488914 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073980  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:44:18.073986  488914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.935µs
	I1202 21:44:18.073991  488914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074001  488914 start.go:364] duration metric: took 25.551µs to acquireMachinesLock for "functional-066896"
	I1202 21:44:18.074000  488914 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074014  488914 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:44:18.074021  488914 fix.go:54] fixHost starting: 
	I1202 21:44:18.074029  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:44:18.074034  488914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.037µs
	I1202 21:44:18.074039  488914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074056  488914 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074084  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:44:18.074089  488914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 41.329µs
	I1202 21:44:18.074093  488914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074101  488914 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074151  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:44:18.074156  488914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 55.623µs
	I1202 21:44:18.074160  488914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074169  488914 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074193  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:44:18.074211  488914 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.457µs
	I1202 21:44:18.074217  488914 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:44:18.074232  488914 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074258  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:44:18.074262  488914 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.032µs
	I1202 21:44:18.074267  488914 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:44:18.074276  488914 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:44:18.074274  488914 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074311  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:44:18.074315  488914 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.174µs
	I1202 21:44:18.074320  488914 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:44:18.074327  488914 cache.go:87] Successfully saved all images to host disk.
	I1202 21:44:18.091506  488914 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:44:18.091527  488914 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:44:18.096748  488914 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:44:18.096772  488914 machine.go:94] provisionDockerMachine start ...
	I1202 21:44:18.096874  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.114456  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.114786  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.114793  488914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:44:18.266794  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.266809  488914 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:44:18.266875  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.286274  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.286575  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.286589  488914 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:44:18.448160  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.448232  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.466449  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.466766  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.466781  488914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:44:18.615365  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:44:18.615380  488914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:44:18.615404  488914 ubuntu.go:190] setting up certificates
	I1202 21:44:18.615412  488914 provision.go:84] configureAuth start
	I1202 21:44:18.615471  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:18.633069  488914 provision.go:143] copyHostCerts
	I1202 21:44:18.633141  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:44:18.633158  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:44:18.633234  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:44:18.633330  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:44:18.633334  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:44:18.633359  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:44:18.633406  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:44:18.633410  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:44:18.633430  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:44:18.633475  488914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:44:19.174279  488914 provision.go:177] copyRemoteCerts
	I1202 21:44:19.174331  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:44:19.174370  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.190978  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.294889  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:44:19.312628  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:44:19.330566  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:44:19.347713  488914 provision.go:87] duration metric: took 732.278587ms to configureAuth
	I1202 21:44:19.347730  488914 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:44:19.347935  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:19.348040  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.364877  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:19.365168  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:19.365182  488914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:44:19.733535  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:44:19.733548  488914 machine.go:97] duration metric: took 1.636769982s to provisionDockerMachine
	I1202 21:44:19.733558  488914 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:44:19.733570  488914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:44:19.733637  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:44:19.733700  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.752520  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.854929  488914 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:44:19.858053  488914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:44:19.858070  488914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:44:19.858080  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:44:19.858131  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:44:19.858206  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:44:19.858277  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:44:19.858317  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:44:19.865625  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:19.882511  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:44:19.899291  488914 start.go:296] duration metric: took 165.718396ms for postStartSetup
	I1202 21:44:19.899374  488914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:44:19.899409  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.915689  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.016990  488914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:44:20.022912  488914 fix.go:56] duration metric: took 1.948885968s for fixHost
	I1202 21:44:20.022943  488914 start.go:83] releasing machines lock for "functional-066896", held for 1.948933476s
	I1202 21:44:20.023059  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:20.041984  488914 ssh_runner.go:195] Run: cat /version.json
	I1202 21:44:20.042007  488914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:44:20.042033  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.042071  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.064148  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.064737  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.168080  488914 ssh_runner.go:195] Run: systemctl --version
	I1202 21:44:20.290437  488914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:44:20.326220  488914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:44:20.331076  488914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:44:20.331137  488914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:44:20.338791  488914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:44:20.338805  488914 start.go:496] detecting cgroup driver to use...
	I1202 21:44:20.338835  488914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:44:20.338881  488914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:44:20.354128  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:44:20.367183  488914 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:44:20.367236  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:44:20.383031  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:44:20.396225  488914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:44:20.505938  488914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:44:20.631853  488914 docker.go:234] disabling docker service ...
	I1202 21:44:20.631909  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:44:20.647481  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:44:20.660948  488914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:44:20.779859  488914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:44:20.901936  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:44:20.922332  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:44:20.937696  488914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:44:20.937766  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.947525  488914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:44:20.947591  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.956868  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.966757  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.976111  488914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:44:20.984116  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.993108  488914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.003934  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.015041  488914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:44:21.023179  488914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:44:21.030977  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.150076  488914 ssh_runner.go:195] Run: sudo systemctl restart crio
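Taken together, the sed edits above leave the CRI-O drop-in with roughly the following keys before the restart (a sketch reconstructed from the commands; the section placement under [crio.image]/[crio.runtime] is assumed, not captured from the node):

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]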
	I1202 21:44:21.327555  488914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:44:21.327622  488914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:44:21.331404  488914 start.go:564] Will wait 60s for crictl version
	I1202 21:44:21.331471  488914 ssh_runner.go:195] Run: which crictl
	I1202 21:44:21.335016  488914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:44:21.359060  488914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:44:21.359133  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.387110  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.420984  488914 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:44:21.423772  488914 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:44:21.440341  488914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:44:21.447237  488914 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:44:21.449900  488914 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:44:21.450046  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:21.450110  488914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:44:21.483620  488914 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:44:21.483631  488914 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:44:21.483637  488914 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:44:21.483726  488914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
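The empty ExecStart= in the unit above is the standard systemd override idiom: a drop-in must first clear the ExecStart inherited from the base kubelet.service before assigning its own, otherwise systemd rejects the duplicate directive for a non-oneshot service. One way to confirm the override is in effect after the daemon-reload further down:

	# show the merged unit, including the drop-in's cleared and re-set ExecStart
	systemctl cat kubelet | grep -A1 '^ExecStart='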
	I1202 21:44:21.483815  488914 ssh_runner.go:195] Run: crio config
	I1202 21:44:21.540157  488914 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:44:21.540183  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:21.540190  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:21.540200  488914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:44:21.540251  488914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:44:21.540412  488914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
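minikube feeds this rendered config straight into the kubeadm phases further down, but a config like this can also be sanity-checked by hand once the file is staged a few lines below; a hedged example, assuming the kubeadm binary from the log's PATH:

	# exercise the config end-to-end without writing manifests or certs to the node
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new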
	I1202 21:44:21.540486  488914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:44:21.551296  488914 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:44:21.551378  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:44:21.559159  488914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:44:21.572470  488914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:44:21.586886  488914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1202 21:44:21.600852  488914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:44:21.604702  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.760401  488914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:44:22.412975  488914 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:44:22.412987  488914 certs.go:195] generating shared ca certs ...
	I1202 21:44:22.413002  488914 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:44:22.413155  488914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:44:22.413195  488914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:44:22.413201  488914 certs.go:257] generating profile certs ...
	I1202 21:44:22.413284  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:44:22.413360  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:44:22.413398  488914 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:44:22.413511  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:44:22.413543  488914 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:44:22.413552  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:44:22.413581  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:44:22.413604  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:44:22.413626  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:44:22.413674  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:22.414299  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:44:22.434951  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:44:22.453111  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:44:22.472098  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:44:22.493256  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:44:22.511523  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:44:22.529485  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:44:22.547667  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:44:22.565085  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:44:22.583650  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:44:22.601678  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:44:22.619263  488914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:44:22.631918  488914 ssh_runner.go:195] Run: openssl version
	I1202 21:44:22.638008  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:44:22.646246  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.649963  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.650030  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.691947  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:44:22.699744  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:44:22.707750  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711346  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711410  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.752553  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:44:22.760779  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:44:22.769102  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.772990  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.773054  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.817125  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
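The link targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the /etc/ssl/certs trust store is indexed by the output of openssl x509 -hash, which is exactly what the preceding openssl runs compute. The same pairing in one step (a sketch using one of the cert paths from the log):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	# link the cert under its subject-hash name so OpenSSL's directory lookup finds it
	sudo ln -fs "$cert" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$cert")".0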
	I1202 21:44:22.825521  488914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:44:22.829263  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:44:22.870268  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:44:22.912651  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:44:22.953793  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:44:22.994690  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:44:23.036128  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
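Each -checkend 86400 call asks whether the certificate will still be valid 24 hours from now: openssl exits 0 if so and 1 if not, and that exit status is what the caller keys off. For example:

	# exit 0: valid for at least another 86400s; exit 1: expiring or expired
	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "cert ok for the next 24h"
	else
	  echo "cert needs renewal"
	fi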
	I1202 21:44:23.077233  488914 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:23.077311  488914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:44:23.077384  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.104728  488914 cri.go:89] found id: ""
	I1202 21:44:23.104787  488914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:44:23.112693  488914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:44:23.112702  488914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:44:23.112754  488914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:44:23.120199  488914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.120715  488914 kubeconfig.go:125] found "functional-066896" server: "https://192.168.49.2:8441"
	I1202 21:44:23.122004  488914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:44:23.129849  488914 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:29:46.719862797 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:44:21.596345133 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 21:44:23.129868  488914 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:44:23.129878  488914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 21:44:23.129934  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.164567  488914 cri.go:89] found id: ""
	I1202 21:44:23.164629  488914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:44:23.192730  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:44:23.201193  488914 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5576 Dec  2 21:33 /etc/kubernetes/scheduler.conf
	
	I1202 21:44:23.201254  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:44:23.209100  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:44:23.217145  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.217201  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:44:23.224901  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.232713  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.232773  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.240473  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:44:23.248046  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.248102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:44:23.255508  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:44:23.263587  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:23.311842  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.167347  488914 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.855478015s)
	I1202 21:44:25.167416  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.367575  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.433420  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
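Rather than a full kubeadm init, the restart path replays individual phases: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order, as the runs above show. The equivalent sequence by hand (same binary and config paths as the log):

	cfg=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # word-splitting of $phase is intentional: each entry is "phase [subphase]"
	  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase $phase --config "$cfg"
	done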
	I1202 21:44:25.478422  488914 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:44:25.478494  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:25.978693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.479461  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.978647  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.479295  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.979313  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.479548  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.979300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.478679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.479305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.979214  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.478682  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.979440  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.478676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.978971  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.478687  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.479399  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.978686  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.479541  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.979365  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.478985  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.978766  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.478652  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.979222  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.478642  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.979289  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.479367  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.978641  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.478896  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.479195  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.979035  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.478597  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.978688  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.478820  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.979413  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.478702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.979325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.478716  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.979514  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.479502  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.978679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.479602  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.978676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.979208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.479262  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.978947  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.478848  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.979340  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.478943  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.979631  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.479208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.978824  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.478692  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.978621  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.479381  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.979217  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.479300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.979309  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.478661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.978590  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.478589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.979149  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.479524  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.979613  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.979556  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.479181  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.479560  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.979258  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.478693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.979403  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.479145  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.979083  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.478795  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.979236  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.478753  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.479607  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.479438  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.978717  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.478907  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.979407  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.478991  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.979216  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.479168  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.979304  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.479589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.979207  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.478756  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.979408  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.979186  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.478671  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.979155  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.478781  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.478767  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.978709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.478610  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.979395  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.479136  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.978666  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.479565  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.978675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.979164  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.478675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.978579  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:25.479540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:25.479652  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:25.504711  488914 cri.go:89] found id: ""
	I1202 21:45:25.504725  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.504732  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:25.504738  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:25.504795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:25.529752  488914 cri.go:89] found id: ""
	I1202 21:45:25.529766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.529773  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:25.529778  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:25.529838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:25.555068  488914 cri.go:89] found id: ""
	I1202 21:45:25.555082  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.555089  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:25.555095  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:25.555154  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:25.583996  488914 cri.go:89] found id: ""
	I1202 21:45:25.584010  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.584017  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:25.584023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:25.584083  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:25.613039  488914 cri.go:89] found id: ""
	I1202 21:45:25.613053  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.613060  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:25.613065  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:25.613125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:25.638912  488914 cri.go:89] found id: ""
	I1202 21:45:25.638926  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.638933  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:25.638938  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:25.639016  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:25.663753  488914 cri.go:89] found id: ""
	I1202 21:45:25.663766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.663773  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:25.663781  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:25.663793  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:25.693023  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:25.693040  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:25.759763  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:25.759782  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:25.774658  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:25.774679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:25.838644  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:25.838656  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:25.838667  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.417551  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:28.428847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:28.428924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:28.461391  488914 cri.go:89] found id: ""
	I1202 21:45:28.461406  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.461413  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:28.461418  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:28.461487  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:28.493536  488914 cri.go:89] found id: ""
	I1202 21:45:28.493549  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.493556  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:28.493561  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:28.493625  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:28.521334  488914 cri.go:89] found id: ""
	I1202 21:45:28.521347  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.521354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:28.521360  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:28.521429  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:28.546459  488914 cri.go:89] found id: ""
	I1202 21:45:28.546472  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.546479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:28.546484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:28.546558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:28.573310  488914 cri.go:89] found id: ""
	I1202 21:45:28.573325  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.573332  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:28.573338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:28.573398  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:28.603231  488914 cri.go:89] found id: ""
	I1202 21:45:28.603245  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.603252  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:28.603259  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:28.603339  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:28.628995  488914 cri.go:89] found id: ""
	I1202 21:45:28.629009  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.629016  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:28.629024  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:28.629034  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:28.694293  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:28.694315  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:28.709309  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:28.709326  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:28.772742  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:28.772763  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:28.772775  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.851065  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:28.851099  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.383921  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:31.394465  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:31.394529  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:31.432030  488914 cri.go:89] found id: ""
	I1202 21:45:31.432046  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.432053  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:31.432061  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:31.432122  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:31.469314  488914 cri.go:89] found id: ""
	I1202 21:45:31.469327  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.469334  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:31.469339  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:31.469399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:31.495701  488914 cri.go:89] found id: ""
	I1202 21:45:31.495715  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.495721  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:31.495726  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:31.495783  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:31.525459  488914 cri.go:89] found id: ""
	I1202 21:45:31.525472  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.525479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:31.525484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:31.525548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:31.551543  488914 cri.go:89] found id: ""
	I1202 21:45:31.551557  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.551564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:31.551569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:31.551635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:31.576459  488914 cri.go:89] found id: ""
	I1202 21:45:31.576473  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.576479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:31.576485  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:31.576543  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:31.605711  488914 cri.go:89] found id: ""
	I1202 21:45:31.605726  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.605733  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:31.605741  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:31.605752  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.637077  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:31.637094  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:31.704571  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:31.704592  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:31.719615  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:31.719640  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:31.784987  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:31.776784   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.777502   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779172   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779783   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.781463   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:31.785007  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:31.785019  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.367127  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:34.377127  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:34.377203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:34.402736  488914 cri.go:89] found id: ""
	I1202 21:45:34.402750  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.402757  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:34.402769  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:34.402864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:34.443728  488914 cri.go:89] found id: ""
	I1202 21:45:34.443742  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.443749  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:34.443754  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:34.443815  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:34.479956  488914 cri.go:89] found id: ""
	I1202 21:45:34.479970  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.479985  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:34.479991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:34.480055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:34.508482  488914 cri.go:89] found id: ""
	I1202 21:45:34.508503  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.508510  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:34.508516  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:34.508573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:34.534801  488914 cri.go:89] found id: ""
	I1202 21:45:34.534814  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.534821  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:34.534826  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:34.534884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:34.559463  488914 cri.go:89] found id: ""
	I1202 21:45:34.559477  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.559484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:34.559490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:34.559551  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:34.584528  488914 cri.go:89] found id: ""
	I1202 21:45:34.584543  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.584550  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:34.584557  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:34.584568  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:34.651241  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:34.651261  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:34.666228  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:34.666244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:34.728086  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:34.720557   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.720952   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.722671   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.723025   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.724562   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:34.728108  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:34.728120  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.804348  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:34.804369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:37.332022  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:37.341829  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:37.341888  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:37.366064  488914 cri.go:89] found id: ""
	I1202 21:45:37.366078  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.366085  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:37.366090  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:37.366147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:37.395570  488914 cri.go:89] found id: ""
	I1202 21:45:37.395584  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.395590  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:37.395595  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:37.395663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:37.429125  488914 cri.go:89] found id: ""
	I1202 21:45:37.429140  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.429147  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:37.429161  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:37.429218  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:37.462030  488914 cri.go:89] found id: ""
	I1202 21:45:37.462054  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.462062  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:37.462080  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:37.462152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:37.490229  488914 cri.go:89] found id: ""
	I1202 21:45:37.490242  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.490260  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:37.490266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:37.490349  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:37.515496  488914 cri.go:89] found id: ""
	I1202 21:45:37.515510  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.515516  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:37.515522  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:37.515578  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:37.544546  488914 cri.go:89] found id: ""
	I1202 21:45:37.544560  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.544567  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:37.544575  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:37.544586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:37.617995  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:37.618023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:37.634282  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:37.634307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:37.704089  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:37.696265   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.697434   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.698656   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.699121   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.700652   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:37.704099  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:37.704110  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:37.780382  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:37.780402  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.308261  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:40.318898  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:40.318954  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:40.351388  488914 cri.go:89] found id: ""
	I1202 21:45:40.351403  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.351409  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:40.351415  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:40.351476  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:40.376844  488914 cri.go:89] found id: ""
	I1202 21:45:40.376857  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.376864  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:40.376869  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:40.376927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:40.400732  488914 cri.go:89] found id: ""
	I1202 21:45:40.400745  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.400752  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:40.400757  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:40.400816  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:40.446048  488914 cri.go:89] found id: ""
	I1202 21:45:40.446061  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.446067  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:40.446075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:40.446134  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:40.475997  488914 cri.go:89] found id: ""
	I1202 21:45:40.476011  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.476018  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:40.476023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:40.476081  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:40.501615  488914 cri.go:89] found id: ""
	I1202 21:45:40.501629  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.501636  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:40.501642  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:40.501705  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:40.526763  488914 cri.go:89] found id: ""
	I1202 21:45:40.526809  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.526816  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:40.526831  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:40.526842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:40.542072  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:40.542088  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:40.603416  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:40.594977   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.595712   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.597533   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.598122   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.599848   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:40.603427  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:40.603437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:40.683775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:40.683797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.710561  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:40.710577  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.275783  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:43.286075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:43.286135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:43.312011  488914 cri.go:89] found id: ""
	I1202 21:45:43.312026  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.312033  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:43.312039  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:43.312099  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:43.337316  488914 cri.go:89] found id: ""
	I1202 21:45:43.337330  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.337337  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:43.337359  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:43.337418  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:43.369627  488914 cri.go:89] found id: ""
	I1202 21:45:43.369641  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.369648  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:43.369653  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:43.369714  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:43.395672  488914 cri.go:89] found id: ""
	I1202 21:45:43.395686  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.395693  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:43.395698  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:43.395757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:43.436721  488914 cri.go:89] found id: ""
	I1202 21:45:43.436735  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.436742  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:43.436747  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:43.436808  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:43.468979  488914 cri.go:89] found id: ""
	I1202 21:45:43.468993  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.469008  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:43.469014  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:43.469084  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:43.500825  488914 cri.go:89] found id: ""
	I1202 21:45:43.500839  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.500846  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:43.500854  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:43.500864  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:43.537110  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:43.537127  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.604154  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:43.604172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:43.619529  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:43.619546  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:43.684232  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:43.676801   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.677191   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.678735   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.679232   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.680785   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:43.684242  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:43.684253  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.262533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:46.273030  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:46.273094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:46.298023  488914 cri.go:89] found id: ""
	I1202 21:45:46.298039  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.298045  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:46.298051  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:46.298109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:46.327737  488914 cri.go:89] found id: ""
	I1202 21:45:46.327752  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.327760  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:46.327769  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:46.327834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:46.353980  488914 cri.go:89] found id: ""
	I1202 21:45:46.353994  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.354003  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:46.354008  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:46.354073  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:46.380386  488914 cri.go:89] found id: ""
	I1202 21:45:46.380400  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.380406  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:46.380412  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:46.380480  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:46.406595  488914 cri.go:89] found id: ""
	I1202 21:45:46.406609  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.406616  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:46.406621  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:46.406679  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:46.441216  488914 cri.go:89] found id: ""
	I1202 21:45:46.441230  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.441237  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:46.441242  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:46.441305  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:46.473258  488914 cri.go:89] found id: ""
	I1202 21:45:46.473272  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.473279  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:46.473287  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:46.473298  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:46.490441  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:46.490458  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:46.554481  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:46.546212   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.546743   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548456   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548932   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.550452   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:46.554490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:46.554501  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.631777  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:46.631800  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:46.660339  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:46.660355  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:49.231885  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:49.243758  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:49.243823  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:49.268714  488914 cri.go:89] found id: ""
	I1202 21:45:49.268728  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.268735  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:49.268741  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:49.268799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:49.293827  488914 cri.go:89] found id: ""
	I1202 21:45:49.293842  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.293849  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:49.293854  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:49.293919  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:49.319633  488914 cri.go:89] found id: ""
	I1202 21:45:49.319647  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.319654  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:49.319661  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:49.319720  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:49.350167  488914 cri.go:89] found id: ""
	I1202 21:45:49.350181  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.350188  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:49.350193  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:49.350252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:49.375814  488914 cri.go:89] found id: ""
	I1202 21:45:49.375828  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.375835  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:49.375841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:49.375905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:49.400638  488914 cri.go:89] found id: ""
	I1202 21:45:49.400657  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.400664  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:49.400670  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:49.400727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:49.453654  488914 cri.go:89] found id: ""
	I1202 21:45:49.453668  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.453680  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:49.453689  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:49.453699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:49.479146  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:49.479161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:49.548448  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:49.548457  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:49.548468  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:49.628739  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:49.628759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:49.658161  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:49.658177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:52.223612  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:52.234793  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:52.234899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:52.265577  488914 cri.go:89] found id: ""
	I1202 21:45:52.265591  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.265598  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:52.265603  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:52.265663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:52.292373  488914 cri.go:89] found id: ""
	I1202 21:45:52.292387  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.292394  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:52.292399  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:52.292466  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:52.317157  488914 cri.go:89] found id: ""
	I1202 21:45:52.317171  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.317178  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:52.317183  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:52.317240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:52.347843  488914 cri.go:89] found id: ""
	I1202 21:45:52.347856  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.347863  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:52.347868  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:52.347927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:52.372874  488914 cri.go:89] found id: ""
	I1202 21:45:52.372889  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.372895  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:52.372900  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:52.372962  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:52.398247  488914 cri.go:89] found id: ""
	I1202 21:45:52.398260  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.398267  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:52.398273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:52.398330  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:52.445693  488914 cri.go:89] found id: ""
	I1202 21:45:52.445706  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.445713  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:52.445721  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:52.445732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:52.465150  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:52.465167  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:52.540766  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:52.540776  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:52.540797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:52.618862  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:52.618882  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:52.648548  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:52.648565  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.221074  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:55.231158  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:55.231215  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:55.256269  488914 cri.go:89] found id: ""
	I1202 21:45:55.256282  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.256289  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:55.256294  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:55.256371  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:55.281345  488914 cri.go:89] found id: ""
	I1202 21:45:55.281360  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.281367  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:55.281372  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:55.281430  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:55.306779  488914 cri.go:89] found id: ""
	I1202 21:45:55.306793  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.306799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:55.306805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:55.306865  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:55.333304  488914 cri.go:89] found id: ""
	I1202 21:45:55.333318  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.333325  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:55.333333  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:55.333393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:55.358550  488914 cri.go:89] found id: ""
	I1202 21:45:55.358563  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.358570  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:55.358575  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:55.358638  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:55.387929  488914 cri.go:89] found id: ""
	I1202 21:45:55.387943  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.387951  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:55.387957  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:55.388020  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:55.426649  488914 cri.go:89] found id: ""
	I1202 21:45:55.426663  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.426670  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:55.426678  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:55.426687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:55.519746  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:55.519772  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:55.554225  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:55.554241  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.622464  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:55.622484  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:55.638187  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:55.638213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:55.703154  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
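	Each retry cycle above follows the same probe: look for a running apiserver process, ask CRI-O for each expected control-plane container, and only then attempt the API via kubectl. A minimal shell sketch of that probe, reusing the exact commands and paths from the Run: lines (the loop structure is illustrative, not minikube's source):

	    # Sketch of one diagnostic cycle; commands copied from the Run: lines above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	With every crictl listing empty, the kubectl step is the first command that actually needs the apiserver, which is why it is the one that fails.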
	I1202 21:45:58.203385  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:58.213686  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:58.213750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:58.239330  488914 cri.go:89] found id: ""
	I1202 21:45:58.239344  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.239351  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:58.239356  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:58.239416  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:58.264371  488914 cri.go:89] found id: ""
	I1202 21:45:58.264385  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.264392  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:58.264397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:58.264454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:58.289420  488914 cri.go:89] found id: ""
	I1202 21:45:58.289434  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.289441  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:58.289446  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:58.289504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:58.317750  488914 cri.go:89] found id: ""
	I1202 21:45:58.317764  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.317772  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:58.317777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:58.317834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:58.341672  488914 cri.go:89] found id: ""
	I1202 21:45:58.341687  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.341694  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:58.341699  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:58.341764  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:58.366074  488914 cri.go:89] found id: ""
	I1202 21:45:58.366088  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.366094  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:58.366099  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:58.366160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:58.390704  488914 cri.go:89] found id: ""
	I1202 21:45:58.390718  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.390724  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:58.390741  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:58.390751  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:58.474575  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:58.474586  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:58.474598  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:58.558574  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:58.558604  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:58.589663  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:58.589680  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:58.656150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:58.656169  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
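	The repeated "connection refused" on localhost:8441 is consistent with the empty crictl listings: no kube-apiserver container exists, so nothing is bound to the port the kubeconfig points at. Two standard checks one could run inside the node to confirm this (not part of the test run itself):

	    sudo ss -tlnp | grep 8441              # nothing should be listening
	    curl -sk https://localhost:8441/livez  # would answer if the apiserver were up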
	I1202 21:46:01.173977  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:01.186201  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:01.186270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:01.213408  488914 cri.go:89] found id: ""
	I1202 21:46:01.213424  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.213430  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:01.213436  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:01.213502  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:01.239993  488914 cri.go:89] found id: ""
	I1202 21:46:01.240007  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.240014  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:01.240019  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:01.240079  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:01.266106  488914 cri.go:89] found id: ""
	I1202 21:46:01.266120  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.266127  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:01.266132  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:01.266194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:01.292600  488914 cri.go:89] found id: ""
	I1202 21:46:01.292614  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.292621  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:01.292627  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:01.292689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:01.318438  488914 cri.go:89] found id: ""
	I1202 21:46:01.318453  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.318460  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:01.318466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:01.318530  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:01.344830  488914 cri.go:89] found id: ""
	I1202 21:46:01.344843  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.344850  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:01.344856  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:01.344914  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:01.370509  488914 cri.go:89] found id: ""
	I1202 21:46:01.370523  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.370534  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:01.370541  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:01.370551  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:01.400108  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:01.400123  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:01.484583  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:01.484603  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:01.501311  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:01.501329  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:01.571182  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:01.571193  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:01.571204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.148935  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:04.159286  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:04.159346  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:04.191266  488914 cri.go:89] found id: ""
	I1202 21:46:04.191279  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.191286  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:04.191291  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:04.191350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:04.217195  488914 cri.go:89] found id: ""
	I1202 21:46:04.217209  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.217216  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:04.217221  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:04.217285  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:04.243674  488914 cri.go:89] found id: ""
	I1202 21:46:04.243689  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.243696  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:04.243701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:04.243760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:04.269892  488914 cri.go:89] found id: ""
	I1202 21:46:04.269905  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.269921  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:04.269927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:04.269998  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:04.296688  488914 cri.go:89] found id: ""
	I1202 21:46:04.296703  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.296711  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:04.296717  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:04.296785  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:04.322967  488914 cri.go:89] found id: ""
	I1202 21:46:04.322981  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.323017  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:04.323023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:04.323091  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:04.348936  488914 cri.go:89] found id: ""
	I1202 21:46:04.348956  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.348963  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:04.348972  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:04.348981  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:04.415190  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:04.415209  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:04.431456  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:04.431472  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:04.504661  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:04.504671  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:04.504682  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.581468  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:04.581487  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:07.110404  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:07.120667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:07.120727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:07.145924  488914 cri.go:89] found id: ""
	I1202 21:46:07.145938  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.145945  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:07.145950  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:07.146010  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:07.171187  488914 cri.go:89] found id: ""
	I1202 21:46:07.171200  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.171207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:07.171212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:07.171270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:07.197187  488914 cri.go:89] found id: ""
	I1202 21:46:07.197201  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.197208  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:07.197213  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:07.197272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:07.222713  488914 cri.go:89] found id: ""
	I1202 21:46:07.222728  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.222735  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:07.222740  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:07.222800  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:07.249213  488914 cri.go:89] found id: ""
	I1202 21:46:07.249226  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.249233  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:07.249239  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:07.249301  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:07.275464  488914 cri.go:89] found id: ""
	I1202 21:46:07.275478  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.275484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:07.275490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:07.275546  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:07.305137  488914 cri.go:89] found id: ""
	I1202 21:46:07.305151  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.305166  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:07.305174  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:07.305187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:07.370440  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:07.370459  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:07.386336  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:07.386354  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:07.458373  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:07.458383  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:07.458395  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:07.542802  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:07.542822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
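	The cycle timestamps (21:45:55, 21:45:58, 21:46:01, 21:46:04, ...) show the probe repeating on a roughly three-second cadence. A hypothetical wait loop with that shape, reusing the pgrep pattern from the log (interval and deadline are guesses chosen to match the observed spacing, not minikube's actual implementation):

	    # Hypothetical polling loop approximating the ~3 s cadence seen above.
	    deadline=$((SECONDS + 360))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never appeared"; exit 1; }
	      sleep 3
	    done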
	I1202 21:46:10.076833  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:10.087724  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:10.087819  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:10.114700  488914 cri.go:89] found id: ""
	I1202 21:46:10.114714  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.114722  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:10.114728  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:10.114794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:10.140632  488914 cri.go:89] found id: ""
	I1202 21:46:10.140646  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.140652  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:10.140658  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:10.140715  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:10.169820  488914 cri.go:89] found id: ""
	I1202 21:46:10.169834  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.169841  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:10.169850  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:10.169911  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:10.195172  488914 cri.go:89] found id: ""
	I1202 21:46:10.195186  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.195193  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:10.195199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:10.195262  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:10.229303  488914 cri.go:89] found id: ""
	I1202 21:46:10.229317  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.229324  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:10.229330  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:10.229392  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:10.257081  488914 cri.go:89] found id: ""
	I1202 21:46:10.257096  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.257102  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:10.257108  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:10.257168  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:10.283246  488914 cri.go:89] found id: ""
	I1202 21:46:10.283259  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.283267  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:10.283274  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:10.283284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:10.351168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:10.351187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:10.366368  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:10.366385  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:10.438623  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:10.438633  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:10.438646  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:10.516775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:10.516796  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:13.045661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:13.056197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:13.056259  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:13.087662  488914 cri.go:89] found id: ""
	I1202 21:46:13.087675  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.087682  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:13.087688  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:13.087748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:13.113347  488914 cri.go:89] found id: ""
	I1202 21:46:13.113361  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.113368  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:13.113373  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:13.113432  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:13.139083  488914 cri.go:89] found id: ""
	I1202 21:46:13.139098  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.139105  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:13.139110  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:13.139181  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:13.165107  488914 cri.go:89] found id: ""
	I1202 21:46:13.165121  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.165128  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:13.165133  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:13.165196  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:13.190075  488914 cri.go:89] found id: ""
	I1202 21:46:13.190090  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.190107  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:13.190113  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:13.190180  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:13.219255  488914 cri.go:89] found id: ""
	I1202 21:46:13.219269  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.219276  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:13.219281  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:13.219342  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:13.245328  488914 cri.go:89] found id: ""
	I1202 21:46:13.245342  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.245350  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:13.245358  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:13.245369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:13.310150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:13.310168  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:13.325530  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:13.325550  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:13.389916  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:13.382188   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.382836   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384508   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384993   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.386473   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:13.389926  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:13.389938  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:13.474064  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:13.474083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:16.007285  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:16.018077  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:16.018147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:16.048444  488914 cri.go:89] found id: ""
	I1202 21:46:16.048458  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.048465  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:16.048477  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:16.048539  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:16.075066  488914 cri.go:89] found id: ""
	I1202 21:46:16.075079  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.075085  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:16.075090  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:16.075152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:16.100648  488914 cri.go:89] found id: ""
	I1202 21:46:16.100662  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.100669  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:16.100674  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:16.100732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:16.131449  488914 cri.go:89] found id: ""
	I1202 21:46:16.131463  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.131470  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:16.131475  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:16.131534  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:16.158249  488914 cri.go:89] found id: ""
	I1202 21:46:16.158263  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.158270  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:16.158276  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:16.158340  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:16.183613  488914 cri.go:89] found id: ""
	I1202 21:46:16.183627  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.183633  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:16.183641  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:16.183702  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:16.209461  488914 cri.go:89] found id: ""
	I1202 21:46:16.209475  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.209483  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:16.209490  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:16.209500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:16.275500  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:16.275520  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:16.291181  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:16.291196  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:16.361346  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:16.353221   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.354005   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355626   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355946   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.357477   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:16.361356  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:16.361368  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:16.437676  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:16.437697  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:18.967950  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:18.977983  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:18.978057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:19.007682  488914 cri.go:89] found id: ""
	I1202 21:46:19.007706  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.007714  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:19.007720  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:19.007794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:19.033939  488914 cri.go:89] found id: ""
	I1202 21:46:19.033961  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.033969  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:19.033975  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:19.034042  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:19.059516  488914 cri.go:89] found id: ""
	I1202 21:46:19.059531  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.059544  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:19.059550  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:19.059616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:19.086051  488914 cri.go:89] found id: ""
	I1202 21:46:19.086065  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.086072  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:19.086078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:19.086135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:19.110886  488914 cri.go:89] found id: ""
	I1202 21:46:19.110899  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.110906  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:19.110911  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:19.110969  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:19.137589  488914 cri.go:89] found id: ""
	I1202 21:46:19.137603  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.137610  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:19.137615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:19.137673  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:19.162755  488914 cri.go:89] found id: ""
	I1202 21:46:19.162769  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.162776  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:19.162784  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:19.162794  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:19.189873  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:19.189888  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:19.255357  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:19.255375  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:19.270844  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:19.270861  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:19.340061  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:19.331455   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.332143   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.333672   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.334108   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.335622   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:19.340072  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:19.340089  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
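	The entries above are one pass of minikube's apiserver wait loop: probe for a running kube-apiserver process, enumerate the control-plane containers through the CRI, and, when both come back empty, dump kubelet, dmesg, describe-nodes, and CRI-O logs before retrying. A minimal sketch of the same probe, assuming a shell inside the minikube node (for example via minikube ssh):

	    # Exits non-zero when no process matches (-x exact match, -n newest, -f full command line)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Prints no IDs when the CRI has no matching container, running or exited
	    sudo crictl ps -a --quiet --name=kube-apiserver

	Both probes come back empty throughout this log, which is why every component below reports found id: "".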
	I1202 21:46:21.925504  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:21.935839  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:21.935899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:21.960350  488914 cri.go:89] found id: ""
	I1202 21:46:21.960363  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.960370  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:21.960375  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:21.960434  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:21.986080  488914 cri.go:89] found id: ""
	I1202 21:46:21.986097  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.986105  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:21.986112  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:21.986174  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:22.014687  488914 cri.go:89] found id: ""
	I1202 21:46:22.014702  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.014709  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:22.014715  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:22.014778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:22.042230  488914 cri.go:89] found id: ""
	I1202 21:46:22.042245  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.042252  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:22.042257  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:22.042320  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:22.072112  488914 cri.go:89] found id: ""
	I1202 21:46:22.072126  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.072134  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:22.072139  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:22.072210  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:22.098531  488914 cri.go:89] found id: ""
	I1202 21:46:22.098555  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.098562  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:22.098568  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:22.098649  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:22.124074  488914 cri.go:89] found id: ""
	I1202 21:46:22.124088  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.124095  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:22.124102  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:22.124112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:22.190291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:22.190311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:22.205264  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:22.205283  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:22.273286  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:22.264766   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.265364   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.266885   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.267553   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.269194   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:22.273308  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:22.273321  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:22.349070  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:22.349090  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
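	Each cycle issues the same crictl query once per component. A hedged shell equivalent of that enumeration (the component list comes from the log; the loop itself is illustrative, not minikube's actual code):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        # --name filters by container name (regex), --quiet prints IDs only, -a includes exited containers
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done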
	I1202 21:46:24.882662  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:24.893199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:24.893260  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:24.918892  488914 cri.go:89] found id: ""
	I1202 21:46:24.918906  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.918913  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:24.918918  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:24.918977  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:24.944030  488914 cri.go:89] found id: ""
	I1202 21:46:24.944043  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.944050  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:24.944055  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:24.944115  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:24.969743  488914 cri.go:89] found id: ""
	I1202 21:46:24.969758  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.969765  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:24.969770  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:24.969827  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:25.003432  488914 cri.go:89] found id: ""
	I1202 21:46:25.003449  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.003459  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:25.003466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:25.003573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:25.030965  488914 cri.go:89] found id: ""
	I1202 21:46:25.030979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.030985  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:25.030991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:25.031072  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:25.057965  488914 cri.go:89] found id: ""
	I1202 21:46:25.057979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.057986  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:25.057991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:25.058048  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:25.085099  488914 cri.go:89] found id: ""
	I1202 21:46:25.085113  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.085129  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:25.085137  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:25.085147  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:25.115538  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:25.115553  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:25.181412  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:25.181432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:25.196691  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:25.196712  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:25.261474  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:25.253377   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.253981   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.255584   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.256147   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.257741   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:25.261490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:25.261500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
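	The kubelet and CRI-O gathers are plain journalctl tails of the corresponding systemd units. To pull the same 400-line windows by hand (--no-pager is added here for non-interactive use; it is not part of the logged command):

	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager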
	I1202 21:46:27.838685  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:27.849142  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:27.849203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:27.874519  488914 cri.go:89] found id: ""
	I1202 21:46:27.874533  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.874539  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:27.874545  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:27.874603  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:27.900185  488914 cri.go:89] found id: ""
	I1202 21:46:27.900198  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.900207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:27.900212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:27.900270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:27.926179  488914 cri.go:89] found id: ""
	I1202 21:46:27.926202  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.926209  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:27.926215  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:27.926280  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:27.951950  488914 cri.go:89] found id: ""
	I1202 21:46:27.951964  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.951971  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:27.951977  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:27.952034  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:27.976779  488914 cri.go:89] found id: ""
	I1202 21:46:27.976793  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.976799  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:27.976804  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:27.976864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:28.013447  488914 cri.go:89] found id: ""
	I1202 21:46:28.013462  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.013479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:28.013495  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:28.013562  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:28.041485  488914 cri.go:89] found id: ""
	I1202 21:46:28.041508  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.041516  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:28.041524  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:28.041536  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:28.057180  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:28.057197  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:28.121537  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:28.113244   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.113943   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.115648   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.116208   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.117879   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:28.121548  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:28.121559  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:28.197190  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:28.197210  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:28.229525  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:28.229541  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
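	The dmesg gather keeps only warning-and-above kernel messages; the flags are standard util-linux dmesg options:

	    # -H human-readable timestamps, -P no pager, -L=never no color,
	    # --level restricts severity; tail keeps the last 400 lines
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400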
	I1202 21:46:30.795826  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:30.806266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:30.806329  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:30.834208  488914 cri.go:89] found id: ""
	I1202 21:46:30.834222  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.834229  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:30.834234  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:30.834293  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:30.859664  488914 cri.go:89] found id: ""
	I1202 21:46:30.859678  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.859685  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:30.859690  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:30.859748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:30.889034  488914 cri.go:89] found id: ""
	I1202 21:46:30.889048  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.889055  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:30.889061  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:30.889117  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:30.914676  488914 cri.go:89] found id: ""
	I1202 21:46:30.914689  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.914696  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:30.914701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:30.914759  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:30.939761  488914 cri.go:89] found id: ""
	I1202 21:46:30.939774  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.939782  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:30.939787  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:30.939843  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:30.965463  488914 cri.go:89] found id: ""
	I1202 21:46:30.965476  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.965483  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:30.965488  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:30.965545  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:30.990187  488914 cri.go:89] found id: ""
	I1202 21:46:30.990200  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.990206  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:30.990224  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:30.990236  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:31.005797  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:31.005813  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:31.069684  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:31.062028   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.062610   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064158   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064666   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.066156   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:31.069694  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:31.069707  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:31.145787  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:31.145809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:31.178743  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:31.178759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
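	Every describe-nodes gather fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port. Two checks that would confirm this from inside the node (both are suggestions, not commands the harness runs):

	    # No output from ss means no listener on 8441
	    sudo ss -ltnp | grep ':8441' || echo 'no listener on 8441'
	    # Probe apiserver readiness directly, with a short timeout
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig \
	        get --raw /readyz --request-timeout=5s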
	I1202 21:46:33.744496  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:33.754580  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:33.754651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:33.779528  488914 cri.go:89] found id: ""
	I1202 21:46:33.779541  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.779548  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:33.779554  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:33.779616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:33.804198  488914 cri.go:89] found id: ""
	I1202 21:46:33.804212  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.804219  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:33.804227  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:33.804289  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:33.829645  488914 cri.go:89] found id: ""
	I1202 21:46:33.829659  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.829666  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:33.829675  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:33.829734  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:33.858338  488914 cri.go:89] found id: ""
	I1202 21:46:33.858352  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.858368  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:33.858375  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:33.858433  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:33.884555  488914 cri.go:89] found id: ""
	I1202 21:46:33.884570  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.884578  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:33.884583  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:33.884651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:33.912967  488914 cri.go:89] found id: ""
	I1202 21:46:33.912981  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.912988  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:33.912994  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:33.913055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:33.938088  488914 cri.go:89] found id: ""
	I1202 21:46:33.938102  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.938110  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:33.938118  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:33.938133  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:34.003604  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:34.003631  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:34.022128  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:34.022146  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:34.092004  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:34.083929   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.084375   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086257   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086725   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.088064   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:34.092015  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:34.092029  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:34.169499  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:34.169519  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
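	The container-status gather wraps a small fallback chain: resolve crictl to an absolute path when installed, otherwise try the bare name, and fall back to docker if the CRI listing fails. The same chain, expanded for readability (behavior unchanged; CRICTL is an illustrative variable name):

	    CRICTL=$(which crictl || echo crictl)      # absolute path when found, bare name otherwise
	    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker ps is the last-resort listing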
	I1202 21:46:36.700051  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:36.711435  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:36.711497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:36.738690  488914 cri.go:89] found id: ""
	I1202 21:46:36.738704  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.738711  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:36.738717  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:36.738776  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:36.765789  488914 cri.go:89] found id: ""
	I1202 21:46:36.765802  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.765810  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:36.765815  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:36.765880  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:36.790056  488914 cri.go:89] found id: ""
	I1202 21:46:36.790070  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.790077  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:36.790082  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:36.790138  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:36.818201  488914 cri.go:89] found id: ""
	I1202 21:46:36.818214  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.818221  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:36.818227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:36.818288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:36.845623  488914 cri.go:89] found id: ""
	I1202 21:46:36.845637  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.845644  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:36.845650  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:36.845710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:36.871336  488914 cri.go:89] found id: ""
	I1202 21:46:36.871350  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.871357  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:36.871362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:36.871427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:36.897589  488914 cri.go:89] found id: ""
	I1202 21:46:36.897605  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.897611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:36.897619  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:36.897630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:36.913198  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:36.913213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:36.973711  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:36.965706   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.966427   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.967404   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.968855   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.969298   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:36.973721  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:36.973732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:37.054868  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:37.054889  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:37.083961  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:37.083976  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.651305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:39.662125  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:39.662189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:39.693251  488914 cri.go:89] found id: ""
	I1202 21:46:39.693264  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.693271  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:39.693277  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:39.693333  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:39.720953  488914 cri.go:89] found id: ""
	I1202 21:46:39.720969  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.720976  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:39.720981  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:39.721039  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:39.747423  488914 cri.go:89] found id: ""
	I1202 21:46:39.747436  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.747443  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:39.747448  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:39.747512  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:39.773314  488914 cri.go:89] found id: ""
	I1202 21:46:39.773328  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.773335  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:39.773340  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:39.773396  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:39.801946  488914 cri.go:89] found id: ""
	I1202 21:46:39.801960  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.801966  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:39.801971  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:39.802027  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:39.831169  488914 cri.go:89] found id: ""
	I1202 21:46:39.831182  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.831189  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:39.831195  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:39.831255  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:39.855958  488914 cri.go:89] found id: ""
	I1202 21:46:39.855972  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.855979  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:39.855987  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:39.855997  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.921041  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:39.921076  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:39.936417  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:39.936433  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:40.005449  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:39.993742   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.994635   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996381   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996674   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.998192   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:40.005465  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:40.005479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:40.099731  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:40.099754  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.632158  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:42.642592  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:42.642655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:42.680753  488914 cri.go:89] found id: ""
	I1202 21:46:42.680767  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.680774  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:42.680780  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:42.680845  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:42.727033  488914 cri.go:89] found id: ""
	I1202 21:46:42.727047  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.727056  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:42.727062  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:42.727125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:42.753808  488914 cri.go:89] found id: ""
	I1202 21:46:42.753822  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.753829  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:42.753848  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:42.753906  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:42.782178  488914 cri.go:89] found id: ""
	I1202 21:46:42.782192  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.782200  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:42.782206  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:42.782272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:42.807839  488914 cri.go:89] found id: ""
	I1202 21:46:42.807853  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.807860  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:42.807867  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:42.807927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:42.834250  488914 cri.go:89] found id: ""
	I1202 21:46:42.834276  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.834283  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:42.834290  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:42.834355  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:42.861699  488914 cri.go:89] found id: ""
	I1202 21:46:42.861721  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.861728  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:42.861736  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:42.861747  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:42.937587  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:42.937608  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.969352  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:42.969374  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:43.035113  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:43.035138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:43.050909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:43.050924  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:43.116601  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:43.107713   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.108431   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.110316   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.111086   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.112866   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
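	The cycles repeat roughly every three seconds (21:46:19, :22, :25, and so on in the timestamps above) until minikube's wait deadline expires. A hedged reconstruction of that outer loop; the interval is inferred from the timestamps, not taken from minikube's source:

	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        echo "apiserver not up at $(date -u +%H:%M:%S); retrying"
	        sleep 3
	    done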
	I1202 21:46:45.616905  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:45.627026  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:45.627089  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:45.653296  488914 cri.go:89] found id: ""
	I1202 21:46:45.653311  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.653318  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:45.653323  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:45.653389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:45.685320  488914 cri.go:89] found id: ""
	I1202 21:46:45.685334  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.685342  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:45.685347  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:45.685407  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:45.714439  488914 cri.go:89] found id: ""
	I1202 21:46:45.714453  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.714460  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:45.714466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:45.714524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:45.741650  488914 cri.go:89] found id: ""
	I1202 21:46:45.741665  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.741672  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:45.741678  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:45.741748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:45.768339  488914 cri.go:89] found id: ""
	I1202 21:46:45.768374  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.768381  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:45.768387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:45.768446  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:45.793382  488914 cri.go:89] found id: ""
	I1202 21:46:45.793396  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.793404  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:45.793410  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:45.793470  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:45.821520  488914 cri.go:89] found id: ""
	I1202 21:46:45.821534  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.821541  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:45.821549  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:45.821560  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:45.836636  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:45.836657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:45.903141  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:45.894421   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.895256   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897082   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897803   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.899654   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.903152  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:45.903182  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:45.983151  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:45.983172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:46.016509  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:46.016525  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:48.589533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:48.600004  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:48.600063  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:48.624724  488914 cri.go:89] found id: ""
	I1202 21:46:48.624738  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.624745  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:48.624751  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:48.624809  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:48.649307  488914 cri.go:89] found id: ""
	I1202 21:46:48.649322  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.649329  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:48.649335  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:48.649393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:48.689464  488914 cri.go:89] found id: ""
	I1202 21:46:48.689477  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.689484  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:48.689489  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:48.689548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:48.718180  488914 cri.go:89] found id: ""
	I1202 21:46:48.718195  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.718202  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:48.718207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:48.718274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:48.748759  488914 cri.go:89] found id: ""
	I1202 21:46:48.748773  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.748781  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:48.748786  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:48.748847  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:48.773610  488914 cri.go:89] found id: ""
	I1202 21:46:48.773624  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.773631  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:48.773637  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:48.773694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:48.798539  488914 cri.go:89] found id: ""
	I1202 21:46:48.798553  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.798560  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:48.798568  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:48.798580  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:48.813434  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:48.813450  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:48.873005  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:48.865979   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.866496   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.867575   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.868055   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.869544   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:48.873016  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:48.873027  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:48.949124  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:48.949143  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:48.981243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:48.981259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.549061  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:51.558950  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:51.559026  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:51.583587  488914 cri.go:89] found id: ""
	I1202 21:46:51.583601  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.583608  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:51.583614  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:51.583674  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:51.609150  488914 cri.go:89] found id: ""
	I1202 21:46:51.609163  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.609170  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:51.609175  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:51.609237  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:51.634897  488914 cri.go:89] found id: ""
	I1202 21:46:51.634910  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.634917  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:51.634922  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:51.634980  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:51.665746  488914 cri.go:89] found id: ""
	I1202 21:46:51.665760  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.665766  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:51.665772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:51.665830  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:51.704219  488914 cri.go:89] found id: ""
	I1202 21:46:51.704233  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.704240  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:51.704246  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:51.704310  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:51.736171  488914 cri.go:89] found id: ""
	I1202 21:46:51.736194  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.736202  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:51.736207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:51.736274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:51.765446  488914 cri.go:89] found id: ""
	I1202 21:46:51.765469  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.765476  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:51.765484  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:51.765494  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:51.792551  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:51.792566  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.857688  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:51.857706  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:51.873199  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:51.873214  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:51.942299  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:51.934624   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.935273   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.936792   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.937322   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.938323   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:51.942311  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:51.942323  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.519031  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:54.529427  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:54.529497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:54.558708  488914 cri.go:89] found id: ""
	I1202 21:46:54.558722  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.558729  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:54.558735  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:54.558796  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:54.583135  488914 cri.go:89] found id: ""
	I1202 21:46:54.583148  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.583155  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:54.583160  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:54.583221  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:54.609361  488914 cri.go:89] found id: ""
	I1202 21:46:54.609382  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.609390  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:54.609396  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:54.609461  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:54.637663  488914 cri.go:89] found id: ""
	I1202 21:46:54.637677  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.637683  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:54.637691  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:54.637748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:54.666901  488914 cri.go:89] found id: ""
	I1202 21:46:54.666915  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.666922  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:54.666927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:54.666987  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:54.695329  488914 cri.go:89] found id: ""
	I1202 21:46:54.695343  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.695350  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:54.695355  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:54.695413  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:54.724947  488914 cri.go:89] found id: ""
	I1202 21:46:54.724961  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.724967  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:54.724975  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:54.724986  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:54.742963  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:54.742980  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:54.810513  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:54.803073   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.803954   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805454   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805860   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.806992   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:54.810523  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:54.810534  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.883552  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:54.883571  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:54.911389  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:54.911406  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.481762  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:57.492870  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:57.492930  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:57.517199  488914 cri.go:89] found id: ""
	I1202 21:46:57.517213  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.517220  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:57.517225  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:57.517292  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:57.543039  488914 cri.go:89] found id: ""
	I1202 21:46:57.543053  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.543060  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:57.543066  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:57.543130  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:57.567509  488914 cri.go:89] found id: ""
	I1202 21:46:57.567524  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.567530  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:57.567536  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:57.567597  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:57.593052  488914 cri.go:89] found id: ""
	I1202 21:46:57.593074  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.593081  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:57.593087  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:57.593151  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:57.618537  488914 cri.go:89] found id: ""
	I1202 21:46:57.618551  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.618558  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:57.618563  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:57.618626  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:57.645917  488914 cri.go:89] found id: ""
	I1202 21:46:57.645931  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.645938  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:57.645943  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:57.646003  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:57.673325  488914 cri.go:89] found id: ""
	I1202 21:46:57.673338  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.673353  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:57.673362  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:57.673378  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:57.748284  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:57.740291   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.740917   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.742583   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.743218   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.744902   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:57.748294  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:57.748305  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:57.828296  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:57.828314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:57.855830  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:57.855846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.921121  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:57.921140  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:00.436836  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:00.448366  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:00.448436  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:00.478939  488914 cri.go:89] found id: ""
	I1202 21:47:00.478953  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.478960  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:00.478969  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:00.479059  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:00.505959  488914 cri.go:89] found id: ""
	I1202 21:47:00.505974  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.505981  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:00.505986  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:00.506050  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:00.532568  488914 cri.go:89] found id: ""
	I1202 21:47:00.532584  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.532597  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:00.532602  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:00.532667  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:00.561666  488914 cri.go:89] found id: ""
	I1202 21:47:00.561680  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.561687  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:00.561692  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:00.561753  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:00.588051  488914 cri.go:89] found id: ""
	I1202 21:47:00.588065  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.588072  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:00.588078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:00.588139  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:00.612422  488914 cri.go:89] found id: ""
	I1202 21:47:00.612437  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.612443  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:00.612449  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:00.612513  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:00.642069  488914 cri.go:89] found id: ""
	I1202 21:47:00.642082  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.642089  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:00.642097  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:00.642108  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:00.727511  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:00.716696   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.717383   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.721543   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.722286   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.724054   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:00.727520  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:00.727531  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:00.803650  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:00.803671  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:00.832608  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:00.832624  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:00.900692  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:00.900713  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.417333  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:03.427135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:03.427205  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:03.451551  488914 cri.go:89] found id: ""
	I1202 21:47:03.451566  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.451573  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:03.451578  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:03.451635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:03.476736  488914 cri.go:89] found id: ""
	I1202 21:47:03.476750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.476757  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:03.476763  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:03.476825  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:03.501736  488914 cri.go:89] found id: ""
	I1202 21:47:03.501750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.501756  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:03.501761  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:03.501820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:03.527339  488914 cri.go:89] found id: ""
	I1202 21:47:03.527353  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.527360  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:03.527365  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:03.527427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:03.552910  488914 cri.go:89] found id: ""
	I1202 21:47:03.552923  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.552930  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:03.552936  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:03.552994  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:03.578110  488914 cri.go:89] found id: ""
	I1202 21:47:03.578124  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.578130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:03.578135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:03.578194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:03.603194  488914 cri.go:89] found id: ""
	I1202 21:47:03.603208  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.603215  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:03.603223  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:03.603233  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:03.688154  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:03.688174  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:03.725392  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:03.725408  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:03.791852  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:03.791873  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.807065  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:03.807080  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:03.882666  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:06.384350  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:06.394676  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:06.394749  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:06.423508  488914 cri.go:89] found id: ""
	I1202 21:47:06.423523  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.423530  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:06.423536  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:06.423595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:06.449675  488914 cri.go:89] found id: ""
	I1202 21:47:06.449689  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.449696  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:06.449701  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:06.449762  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:06.480053  488914 cri.go:89] found id: ""
	I1202 21:47:06.480066  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.480073  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:06.480078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:06.480140  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:06.508415  488914 cri.go:89] found id: ""
	I1202 21:47:06.508428  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.508435  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:06.508440  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:06.508498  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:06.533743  488914 cri.go:89] found id: ""
	I1202 21:47:06.533756  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.533763  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:06.533776  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:06.533836  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:06.558457  488914 cri.go:89] found id: ""
	I1202 21:47:06.558472  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.558479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:06.558484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:06.558548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:06.585312  488914 cri.go:89] found id: ""
	I1202 21:47:06.585326  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.585333  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:06.585341  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:06.585352  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:06.600648  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:06.600665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:06.677036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:06.677046  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:06.677058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:06.757223  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:06.757244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:06.785439  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:06.785455  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.357941  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:09.369144  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:09.369207  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:09.398056  488914 cri.go:89] found id: ""
	I1202 21:47:09.398070  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.398077  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:09.398083  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:09.398143  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:09.424606  488914 cri.go:89] found id: ""
	I1202 21:47:09.424620  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.424628  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:09.424633  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:09.424694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:09.451520  488914 cri.go:89] found id: ""
	I1202 21:47:09.451535  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.451542  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:09.451547  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:09.451607  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:09.477315  488914 cri.go:89] found id: ""
	I1202 21:47:09.477330  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.477337  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:09.477344  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:09.477399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:09.503654  488914 cri.go:89] found id: ""
	I1202 21:47:09.503668  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.503675  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:09.503680  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:09.503750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:09.529545  488914 cri.go:89] found id: ""
	I1202 21:47:09.529558  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.529565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:09.529571  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:09.529629  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:09.554726  488914 cri.go:89] found id: ""
	I1202 21:47:09.554740  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.554747  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:09.554754  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:09.554767  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.620273  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:09.620293  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:09.635655  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:09.635672  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:09.720524  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:09.720534  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:09.720544  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:09.800379  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:09.800400  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:12.331221  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:12.341899  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:12.341957  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:12.369642  488914 cri.go:89] found id: ""
	I1202 21:47:12.369656  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.369663  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:12.369668  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:12.369729  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:12.395917  488914 cri.go:89] found id: ""
	I1202 21:47:12.395930  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.395938  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:12.395943  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:12.396015  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:12.422817  488914 cri.go:89] found id: ""
	I1202 21:47:12.422831  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.422838  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:12.422843  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:12.422903  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:12.451973  488914 cri.go:89] found id: ""
	I1202 21:47:12.451986  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.451993  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:12.451998  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:12.452057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:12.477543  488914 cri.go:89] found id: ""
	I1202 21:47:12.477557  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.477564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:12.477569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:12.477627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:12.504941  488914 cri.go:89] found id: ""
	I1202 21:47:12.504954  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.504961  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:12.504967  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:12.505025  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:12.530800  488914 cri.go:89] found id: ""
	I1202 21:47:12.530821  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.530828  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:12.530836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:12.530846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:12.596910  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:12.596929  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:12.612316  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:12.612333  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:12.684014  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:12.684025  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:12.684039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:12.771749  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:12.771771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:15.304325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:15.315385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:15.315451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:15.341411  488914 cri.go:89] found id: ""
	I1202 21:47:15.341427  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.341434  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:15.341439  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:15.341501  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:15.366798  488914 cri.go:89] found id: ""
	I1202 21:47:15.366811  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.366818  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:15.366824  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:15.366884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:15.391138  488914 cri.go:89] found id: ""
	I1202 21:47:15.391152  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.391159  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:15.391164  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:15.391226  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:15.415514  488914 cri.go:89] found id: ""
	I1202 21:47:15.415528  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.415535  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:15.415540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:15.415595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:15.440750  488914 cri.go:89] found id: ""
	I1202 21:47:15.440764  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.440771  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:15.440777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:15.440839  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:15.469806  488914 cri.go:89] found id: ""
	I1202 21:47:15.469820  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.469827  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:15.469833  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:15.469891  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:15.497648  488914 cri.go:89] found id: ""
	I1202 21:47:15.497661  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.497668  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:15.497675  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:15.497687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:15.567654  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:15.567679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:15.582770  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:15.582785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:15.647132  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:15.647143  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:15.647154  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:15.740463  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:15.740492  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.270232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:18.280720  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:18.280782  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:18.305710  488914 cri.go:89] found id: ""
	I1202 21:47:18.305724  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.305731  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:18.305736  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:18.305793  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:18.329526  488914 cri.go:89] found id: ""
	I1202 21:47:18.329539  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.329545  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:18.329550  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:18.329606  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:18.355166  488914 cri.go:89] found id: ""
	I1202 21:47:18.355195  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.355202  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:18.355207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:18.355275  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:18.381992  488914 cri.go:89] found id: ""
	I1202 21:47:18.382006  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.382013  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:18.382018  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:18.382080  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:18.410268  488914 cri.go:89] found id: ""
	I1202 21:47:18.410283  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.410290  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:18.410296  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:18.410354  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:18.434607  488914 cri.go:89] found id: ""
	I1202 21:47:18.434620  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.434627  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:18.434632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:18.434689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:18.460092  488914 cri.go:89] found id: ""
	I1202 21:47:18.460106  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.460112  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:18.460120  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:18.460130  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:18.525571  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:18.525580  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:18.525591  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:18.601752  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:18.601776  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.631242  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:18.631258  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:18.706458  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:18.706478  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.222232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:21.232120  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:21.232178  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:21.257057  488914 cri.go:89] found id: ""
	I1202 21:47:21.257071  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.257078  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:21.257089  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:21.257145  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:21.281739  488914 cri.go:89] found id: ""
	I1202 21:47:21.281752  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.281759  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:21.281764  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:21.281820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:21.306878  488914 cri.go:89] found id: ""
	I1202 21:47:21.306892  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.306899  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:21.306905  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:21.306959  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:21.332327  488914 cri.go:89] found id: ""
	I1202 21:47:21.332340  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.332347  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:21.332352  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:21.332408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:21.356717  488914 cri.go:89] found id: ""
	I1202 21:47:21.356730  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.356737  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:21.356742  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:21.356799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:21.380787  488914 cri.go:89] found id: ""
	I1202 21:47:21.380801  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.380807  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:21.380813  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:21.380867  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:21.405984  488914 cri.go:89] found id: ""
	I1202 21:47:21.405998  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.406005  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:21.406013  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:21.406023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:21.438420  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:21.438435  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:21.503149  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:21.503170  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.518755  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:21.518771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:21.584415  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:21.584425  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:21.584437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.161915  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:24.172338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:24.172401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:24.197081  488914 cri.go:89] found id: ""
	I1202 21:47:24.197095  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.197102  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:24.197108  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:24.197166  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:24.222792  488914 cri.go:89] found id: ""
	I1202 21:47:24.222806  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.222827  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:24.222833  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:24.222898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:24.248463  488914 cri.go:89] found id: ""
	I1202 21:47:24.248486  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.248495  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:24.248500  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:24.248561  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:24.282539  488914 cri.go:89] found id: ""
	I1202 21:47:24.282554  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.282561  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:24.282567  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:24.282636  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:24.308071  488914 cri.go:89] found id: ""
	I1202 21:47:24.308086  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.308093  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:24.308098  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:24.308165  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:24.333666  488914 cri.go:89] found id: ""
	I1202 21:47:24.333689  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.333696  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:24.333702  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:24.333769  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:24.363212  488914 cri.go:89] found id: ""
	I1202 21:47:24.363226  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.363233  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:24.363254  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:24.363265  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:24.428642  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:24.428664  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:24.444347  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:24.444363  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:24.510036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:24.510047  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:24.510058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.585705  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:24.585726  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:27.116827  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:27.127233  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:27.127299  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:27.156311  488914 cri.go:89] found id: ""
	I1202 21:47:27.156325  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.156332  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:27.156337  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:27.156401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:27.180597  488914 cri.go:89] found id: ""
	I1202 21:47:27.180611  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.180617  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:27.180623  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:27.180682  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:27.205333  488914 cri.go:89] found id: ""
	I1202 21:47:27.205347  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.205354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:27.205359  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:27.205417  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:27.231165  488914 cri.go:89] found id: ""
	I1202 21:47:27.231179  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.231186  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:27.231192  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:27.231251  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:27.260640  488914 cri.go:89] found id: ""
	I1202 21:47:27.260654  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.260662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:27.260667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:27.260732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:27.286552  488914 cri.go:89] found id: ""
	I1202 21:47:27.286566  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.286573  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:27.286578  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:27.286637  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:27.311590  488914 cri.go:89] found id: ""
	I1202 21:47:27.311604  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.311611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:27.311619  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:27.311630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:27.376291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:27.376311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:27.391299  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:27.391314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:27.452046  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:27.452056  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:27.452067  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:27.527099  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:27.527119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.055495  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:30.067197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:30.067272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:30.093385  488914 cri.go:89] found id: ""
	I1202 21:47:30.093400  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.093407  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:30.093413  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:30.093475  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:30.120468  488914 cri.go:89] found id: ""
	I1202 21:47:30.120482  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.120490  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:30.120495  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:30.120558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:30.147744  488914 cri.go:89] found id: ""
	I1202 21:47:30.147759  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.147767  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:30.147772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:30.147838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:30.173628  488914 cri.go:89] found id: ""
	I1202 21:47:30.173650  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.173658  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:30.173664  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:30.173742  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:30.201952  488914 cri.go:89] found id: ""
	I1202 21:47:30.201992  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.202001  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:30.202007  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:30.202075  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:30.228366  488914 cri.go:89] found id: ""
	I1202 21:47:30.228380  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.228387  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:30.228399  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:30.228468  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:30.254412  488914 cri.go:89] found id: ""
	I1202 21:47:30.254426  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.254434  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:30.254442  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:30.254453  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:30.330454  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:30.330474  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.364243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:30.364259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:30.429823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:30.429841  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:30.445036  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:30.445058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:30.506029  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:33.006821  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:33.017853  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:33.017924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:33.043314  488914 cri.go:89] found id: ""
	I1202 21:47:33.043328  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.043335  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:33.043343  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:33.043402  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:33.068806  488914 cri.go:89] found id: ""
	I1202 21:47:33.068820  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.068826  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:33.068831  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:33.068889  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:33.097822  488914 cri.go:89] found id: ""
	I1202 21:47:33.097835  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.097842  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:33.097847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:33.097905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:33.123154  488914 cri.go:89] found id: ""
	I1202 21:47:33.123168  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.123176  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:33.123181  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:33.123240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:33.148284  488914 cri.go:89] found id: ""
	I1202 21:47:33.148298  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.148305  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:33.148310  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:33.148369  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:33.173434  488914 cri.go:89] found id: ""
	I1202 21:47:33.173448  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.173454  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:33.173460  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:33.173519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:33.198619  488914 cri.go:89] found id: ""
	I1202 21:47:33.198633  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.198640  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:33.198647  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:33.198662  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:33.263426  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:33.263446  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:33.279026  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:33.279042  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:33.339351  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:33.331868   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.332345   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334080   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334388   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.335856   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:33.339361  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:33.339372  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:33.418569  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:33.418588  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
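The cycle above is what repeats below every few seconds: minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, then re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal shell sketch of that probe loop, assembled from the exact commands the log runs (the for-loop wrapper is editorial; run inside the node, e.g. via minikube ssh):

    # Does a kube-apiserver process for this profile exist at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
    # Is any control-plane container (running or exited) known to CRI-O?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\""
    done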
	I1202 21:47:35.951124  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:35.962387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:35.962491  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:35.989088  488914 cri.go:89] found id: ""
	I1202 21:47:35.989102  488914 logs.go:282] 0 containers: []
	W1202 21:47:35.989109  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:35.989115  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:35.989176  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:36.017461  488914 cri.go:89] found id: ""
	I1202 21:47:36.017477  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.017484  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:36.017490  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:36.017614  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:36.046790  488914 cri.go:89] found id: ""
	I1202 21:47:36.046805  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.046812  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:36.046817  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:36.046875  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:36.073683  488914 cri.go:89] found id: ""
	I1202 21:47:36.073697  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.073704  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:36.073710  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:36.073767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:36.101900  488914 cri.go:89] found id: ""
	I1202 21:47:36.101914  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.101921  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:36.101926  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:36.101985  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:36.130435  488914 cri.go:89] found id: ""
	I1202 21:47:36.130449  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.130456  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:36.130462  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:36.130524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:36.157134  488914 cri.go:89] found id: ""
	I1202 21:47:36.157148  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.157155  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:36.157163  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:36.157173  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:36.221900  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:36.221919  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:36.237051  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:36.237068  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:36.299876  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:36.291935   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.292632   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294289   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294810   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.296452   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:36.299886  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:36.299910  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:36.374213  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:36.374232  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:38.902545  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:38.913357  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:38.913415  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:38.944543  488914 cri.go:89] found id: ""
	I1202 21:47:38.944557  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.944563  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:38.944569  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:38.944627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:38.975916  488914 cri.go:89] found id: ""
	I1202 21:47:38.975930  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.975937  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:38.975942  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:38.976001  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:39.009795  488914 cri.go:89] found id: ""
	I1202 21:47:39.009810  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.009817  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:39.009823  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:39.009886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:39.034688  488914 cri.go:89] found id: ""
	I1202 21:47:39.034718  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.034726  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:39.034732  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:39.034805  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:39.059667  488914 cri.go:89] found id: ""
	I1202 21:47:39.059693  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.059701  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:39.059706  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:39.059767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:39.085837  488914 cri.go:89] found id: ""
	I1202 21:47:39.085851  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.085868  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:39.085873  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:39.085941  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:39.111280  488914 cri.go:89] found id: ""
	I1202 21:47:39.111295  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.111302  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:39.111310  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:39.111320  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:39.175646  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:39.175668  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:39.190971  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:39.190987  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:39.258563  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:39.251357   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.251945   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253419   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253861   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.254959   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:39.258573  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:39.258584  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:39.333779  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:39.333798  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
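Every describe-nodes attempt fails the same way: kubectl cannot reach https://localhost:8441, which is consistent with the absence of any kube-apiserver container above. A quick way to confirm nothing is serving that port (a sketch, assuming curl and ss are available inside the node):

    # Expect a connection-refused failure while no apiserver container exists.
    curl -sk https://localhost:8441/healthz || echo 'connection refused on 8441'
    # Confirm no listener is bound to the apiserver port.
    sudo ss -tlnp | grep ':8441' || echo 'nothing listening on 8441'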
	I1202 21:47:41.863817  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:41.873822  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:41.873882  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:41.899560  488914 cri.go:89] found id: ""
	I1202 21:47:41.899585  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.899592  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:41.899598  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:41.899663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:41.937866  488914 cri.go:89] found id: ""
	I1202 21:47:41.937880  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.937887  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:41.937892  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:41.937960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:41.971862  488914 cri.go:89] found id: ""
	I1202 21:47:41.971876  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.971901  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:41.971907  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:41.971975  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:42.010639  488914 cri.go:89] found id: ""
	I1202 21:47:42.010655  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.010663  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:42.010695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:42.010778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:42.040775  488914 cri.go:89] found id: ""
	I1202 21:47:42.040790  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.040800  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:42.040805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:42.040881  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:42.072124  488914 cri.go:89] found id: ""
	I1202 21:47:42.072139  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.072149  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:42.072175  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:42.072252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:42.105424  488914 cri.go:89] found id: ""
	I1202 21:47:42.105439  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.105447  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:42.105456  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:42.105467  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:42.175007  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:42.175032  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:42.194759  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:42.194785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:42.271235  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:42.261967   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.262745   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264485   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264882   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.266741   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:42.261967   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.262745   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264485   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264882   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.266741   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:42.271247  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:42.271260  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:42.360263  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:42.360296  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:44.892475  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:44.902425  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:44.902484  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:44.929930  488914 cri.go:89] found id: ""
	I1202 21:47:44.929944  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.929952  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:44.929957  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:44.930017  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:44.959205  488914 cri.go:89] found id: ""
	I1202 21:47:44.959219  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.959225  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:44.959231  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:44.959288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:44.991335  488914 cri.go:89] found id: ""
	I1202 21:47:44.991350  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.991357  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:44.991362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:44.991437  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:45.047326  488914 cri.go:89] found id: ""
	I1202 21:47:45.047342  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.047350  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:45.047358  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:45.047440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:45.110770  488914 cri.go:89] found id: ""
	I1202 21:47:45.110787  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.110796  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:45.110803  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:45.110872  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:45.147274  488914 cri.go:89] found id: ""
	I1202 21:47:45.147290  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.147298  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:45.147304  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:45.147372  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:45.230398  488914 cri.go:89] found id: ""
	I1202 21:47:45.230413  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.230421  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:45.230437  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:45.230457  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:45.315457  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:45.307106   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.308124   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.309943   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.310298   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.311989   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:45.315469  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:45.315479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:45.391401  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:45.391421  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:45.422183  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:45.422200  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:45.491250  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:45.491269  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
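Since kubelet logs are collected on every pass but no control-plane containers ever appear, the next thing to check is whether the kubelet itself is healthy and whether the static pod manifests it should be running are present. A sketch, assuming the kubeadm-style layout minikube uses by default:

    # Is the kubelet unit running?
    systemctl is-active kubelet
    # Tail its recent journal for startup errors (same source the log gathers above).
    sudo journalctl -u kubelet -n 50 --no-pager
    # Static pod manifests for apiserver/etcd/scheduler/controller-manager should live here.
    ls -l /etc/kubernetes/manifests/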
	I1202 21:47:48.007522  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:48.019509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:48.019579  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:48.047045  488914 cri.go:89] found id: ""
	I1202 21:47:48.047059  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.047066  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:48.047072  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:48.047133  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:48.073355  488914 cri.go:89] found id: ""
	I1202 21:47:48.073370  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.073377  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:48.073383  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:48.073443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:48.101623  488914 cri.go:89] found id: ""
	I1202 21:47:48.101640  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.101653  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:48.101658  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:48.101728  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:48.128708  488914 cri.go:89] found id: ""
	I1202 21:47:48.128722  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.128729  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:48.128734  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:48.128795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:48.154337  488914 cri.go:89] found id: ""
	I1202 21:47:48.154352  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.154359  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:48.154364  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:48.154426  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:48.181724  488914 cri.go:89] found id: ""
	I1202 21:47:48.181739  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.181746  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:48.181752  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:48.181810  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:48.207628  488914 cri.go:89] found id: ""
	I1202 21:47:48.207641  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.207648  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:48.207655  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:48.207665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:48.273678  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:48.273699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.289393  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:48.289410  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:48.353116  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:48.345571   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.346016   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347574   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347915   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.349479   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:48.353126  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:48.353138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:48.429785  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:48.429809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:50.961028  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:50.971337  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:50.971408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:51.004925  488914 cri.go:89] found id: ""
	I1202 21:47:51.004941  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.004949  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:51.004956  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:51.005023  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:51.033852  488914 cri.go:89] found id: ""
	I1202 21:47:51.033866  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.033873  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:51.033879  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:51.033951  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:51.065370  488914 cri.go:89] found id: ""
	I1202 21:47:51.065384  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.065392  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:51.065397  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:51.065454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:51.091797  488914 cri.go:89] found id: ""
	I1202 21:47:51.091811  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.091819  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:51.091824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:51.091886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:51.118245  488914 cri.go:89] found id: ""
	I1202 21:47:51.118260  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.118267  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:51.118273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:51.118350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:51.144813  488914 cri.go:89] found id: ""
	I1202 21:47:51.144828  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.144835  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:51.144841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:51.144898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:51.170591  488914 cri.go:89] found id: ""
	I1202 21:47:51.170605  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.170622  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:51.170630  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:51.170641  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:51.201061  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:51.201078  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:51.268903  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:51.268922  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:51.286516  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:51.286532  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:51.360635  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:51.352997   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.353506   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.354983   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.355562   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.357043   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:51.360647  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:51.360658  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:53.937801  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:53.951326  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:53.951403  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:53.981411  488914 cri.go:89] found id: ""
	I1202 21:47:53.981424  488914 logs.go:282] 0 containers: []
	W1202 21:47:53.981431  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:53.981444  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:53.981504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:54.019553  488914 cri.go:89] found id: ""
	I1202 21:47:54.019568  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.019576  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:54.019581  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:54.019641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:54.045870  488914 cri.go:89] found id: ""
	I1202 21:47:54.045884  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.045891  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:54.045896  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:54.045960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:54.072428  488914 cri.go:89] found id: ""
	I1202 21:47:54.072443  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.072450  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:54.072455  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:54.072519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:54.098413  488914 cri.go:89] found id: ""
	I1202 21:47:54.098427  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.098434  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:54.098439  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:54.098497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:54.124502  488914 cri.go:89] found id: ""
	I1202 21:47:54.124517  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.124524  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:54.124529  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:54.124589  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:54.151244  488914 cri.go:89] found id: ""
	I1202 21:47:54.151258  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.151265  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:54.151273  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:54.151284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:54.213677  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:54.205894   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.206296   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.207892   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.208209   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.209760   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:54.213688  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:54.213700  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:54.289814  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:54.289835  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:54.319415  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:54.319432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:54.385725  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:54.385745  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:56.902920  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:56.915363  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:56.915439  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:56.942569  488914 cri.go:89] found id: ""
	I1202 21:47:56.942583  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.942590  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:56.942596  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:56.942655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:56.975362  488914 cri.go:89] found id: ""
	I1202 21:47:56.975384  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.975391  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:56.975397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:56.975456  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:57.006861  488914 cri.go:89] found id: ""
	I1202 21:47:57.006877  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.006884  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:57.006890  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:57.006958  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:57.033667  488914 cri.go:89] found id: ""
	I1202 21:47:57.033682  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.033689  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:57.033695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:57.033751  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:57.059458  488914 cri.go:89] found id: ""
	I1202 21:47:57.059472  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.059479  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:57.059484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:57.059544  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:57.086098  488914 cri.go:89] found id: ""
	I1202 21:47:57.086112  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.086130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:57.086136  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:57.086206  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:57.112732  488914 cri.go:89] found id: ""
	I1202 21:47:57.112747  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.112754  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:57.112762  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:57.112773  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:57.141211  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:57.141226  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:57.210823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:57.210842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:57.226149  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:57.226166  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:57.287720  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:57.280020   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.280594   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282136   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282592   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.284108   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:57.287730  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:57.287742  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:59.865507  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:59.875824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:59.875886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:59.901721  488914 cri.go:89] found id: ""
	I1202 21:47:59.901735  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.901741  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:59.901747  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:59.901834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:59.938763  488914 cri.go:89] found id: ""
	I1202 21:47:59.938777  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.938784  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:59.938789  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:59.938844  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:59.968613  488914 cri.go:89] found id: ""
	I1202 21:47:59.968627  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.968634  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:59.968639  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:59.968696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:00.011145  488914 cri.go:89] found id: ""
	I1202 21:48:00.011162  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.011172  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:00.011179  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:00.011248  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:00.128636  488914 cri.go:89] found id: ""
	I1202 21:48:00.128653  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.128662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:00.128668  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:00.128743  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:00.191602  488914 cri.go:89] found id: ""
	I1202 21:48:00.191633  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.191642  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:00.191651  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:00.191735  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:00.286597  488914 cri.go:89] found id: ""
	I1202 21:48:00.286618  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.286626  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:00.286635  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:00.286657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:00.393972  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:00.394009  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:00.425438  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:00.425462  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:00.522799  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:00.522810  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:00.522822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:00.603332  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:00.603356  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
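
Each retry cycle above issues the same crictl query once per control-plane component ("sudo crictl ps -a --quiet --name=..."), and every query comes back empty, which is what the found id: "" / "0 containers" lines record. A standalone sketch of that scan (an assumption-laden stand-in for cri.go, not minikube's code; it presumes it runs on the minikube node with crictl installed, and the component list is copied from the log):

    // list_components.go: run the same crictl query the log shows for each
    // control-plane component and report which ones have no container at all.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Equivalent to: sudo crictl ps -a --quiet --name=<name>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
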
	I1202 21:48:03.142041  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:03.152666  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:03.152730  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:03.179575  488914 cri.go:89] found id: ""
	I1202 21:48:03.179589  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.179596  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:03.179601  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:03.179666  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:03.208278  488914 cri.go:89] found id: ""
	I1202 21:48:03.208293  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.208300  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:03.208305  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:03.208365  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:03.237068  488914 cri.go:89] found id: ""
	I1202 21:48:03.237081  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.237088  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:03.237093  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:03.237150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:03.262185  488914 cri.go:89] found id: ""
	I1202 21:48:03.262199  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.262206  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:03.262212  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:03.262270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:03.287056  488914 cri.go:89] found id: ""
	I1202 21:48:03.287076  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.287082  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:03.287088  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:03.287150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:03.312745  488914 cri.go:89] found id: ""
	I1202 21:48:03.312759  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.312766  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:03.312774  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:03.312831  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:03.337493  488914 cri.go:89] found id: ""
	I1202 21:48:03.337507  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.337514  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:03.337522  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:03.337535  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:03.398946  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:03.398957  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:03.398969  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:03.475063  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:03.475083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:03.502836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:03.502852  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:03.569966  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:03.569985  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.085423  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:06.096220  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:06.096284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:06.124362  488914 cri.go:89] found id: ""
	I1202 21:48:06.124378  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.124384  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:06.124392  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:06.124451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:06.150807  488914 cri.go:89] found id: ""
	I1202 21:48:06.150822  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.150829  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:06.150835  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:06.150896  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:06.177096  488914 cri.go:89] found id: ""
	I1202 21:48:06.177110  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.177117  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:06.177122  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:06.177189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:06.202670  488914 cri.go:89] found id: ""
	I1202 21:48:06.202684  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.202691  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:06.202697  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:06.202760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:06.227599  488914 cri.go:89] found id: ""
	I1202 21:48:06.227614  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.227626  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:06.227632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:06.227692  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:06.252361  488914 cri.go:89] found id: ""
	I1202 21:48:06.252375  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.252381  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:06.252387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:06.252443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:06.278301  488914 cri.go:89] found id: ""
	I1202 21:48:06.278315  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.278323  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:06.278331  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:06.278341  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:06.344608  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:06.344629  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.359909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:06.359925  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:06.427972  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:06.427982  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:06.427993  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:06.503390  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:06.503409  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
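
The pgrep probes above land at 21:47:59, 21:48:03, 21:48:06, 21:48:09, ...: one attempt roughly every 3 seconds, each followed by the full container scan and log gather. A sketch of that wait loop (the 3-second interval matches the log's cadence; the 4-minute deadline is an assumption, not a value read from this run):

    // wait_apiserver.go: poll the apiserver port until it answers or a deadline
    // passes, mirroring the roughly 3 s cadence visible in the timestamps above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed timeout, not from the log
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver port is up")
                return
            }
            fmt.Println("still down:", err)
            time.Sleep(3 * time.Second) // interval matching the log's cadence
        }
        fmt.Println("gave up waiting for localhost:8441")
    }
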
	I1202 21:48:09.032284  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:09.043491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:09.043554  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:09.073343  488914 cri.go:89] found id: ""
	I1202 21:48:09.073358  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.073365  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:09.073371  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:09.073438  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:09.106311  488914 cri.go:89] found id: ""
	I1202 21:48:09.106325  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.106332  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:09.106337  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:09.106400  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:09.137607  488914 cri.go:89] found id: ""
	I1202 21:48:09.137622  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.137630  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:09.137635  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:09.137696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:09.165465  488914 cri.go:89] found id: ""
	I1202 21:48:09.165479  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.165486  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:09.165491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:09.165553  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:09.191695  488914 cri.go:89] found id: ""
	I1202 21:48:09.191709  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.191715  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:09.191721  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:09.191778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:09.217199  488914 cri.go:89] found id: ""
	I1202 21:48:09.217213  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.217221  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:09.217227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:09.217284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:09.243947  488914 cri.go:89] found id: ""
	I1202 21:48:09.243961  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.243977  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:09.243985  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:09.243995  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:09.259022  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:09.259038  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:09.325462  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:09.325472  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:09.325483  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:09.404565  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:09.404586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.435844  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:09.435860  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:12.005527  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:12.017298  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:12.017364  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:12.043631  488914 cri.go:89] found id: ""
	I1202 21:48:12.043645  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.043652  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:12.043657  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:12.043717  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:12.072548  488914 cri.go:89] found id: ""
	I1202 21:48:12.072562  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.072569  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:12.072574  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:12.072634  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:12.097779  488914 cri.go:89] found id: ""
	I1202 21:48:12.097792  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.097799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:12.097806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:12.097861  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:12.122380  488914 cri.go:89] found id: ""
	I1202 21:48:12.122394  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.122400  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:12.122406  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:12.122462  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:12.147485  488914 cri.go:89] found id: ""
	I1202 21:48:12.147499  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.147506  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:12.147511  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:12.147569  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:12.172352  488914 cri.go:89] found id: ""
	I1202 21:48:12.172372  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.172379  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:12.172385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:12.172451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:12.197386  488914 cri.go:89] found id: ""
	I1202 21:48:12.197400  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.197406  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:12.197414  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:12.197425  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:12.212275  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:12.212291  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:12.283599  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:12.283609  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:12.283620  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:12.362146  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:12.362177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:12.394426  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:12.394452  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
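
With no containers to inspect, each cycle falls back to host-level sources: journalctl for kubelet and CRI-O, a filtered dmesg, and a crictl/docker process listing. The commands in the sketch below are copied verbatim from the log; the Go wrapper around them is illustrative only:

    // gather_logs.go: run the same host-level diagnostics the cycles above fall
    // back to, in a fixed order, printing whatever each command returns.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            fmt.Printf("=== %s ===\n", s.name)
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Println("command failed:", err)
            }
            fmt.Print(string(out))
        }
    }
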
	I1202 21:48:14.959300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:14.969317  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:14.969378  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:14.995679  488914 cri.go:89] found id: ""
	I1202 21:48:14.995693  488914 logs.go:282] 0 containers: []
	W1202 21:48:14.995701  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:14.995706  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:14.995767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:15.039291  488914 cri.go:89] found id: ""
	I1202 21:48:15.039307  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.039316  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:15.039322  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:15.039440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:15.066778  488914 cri.go:89] found id: ""
	I1202 21:48:15.066793  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.066800  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:15.066806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:15.066866  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:15.096009  488914 cri.go:89] found id: ""
	I1202 21:48:15.096031  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.096039  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:15.096045  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:15.096109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:15.124965  488914 cri.go:89] found id: ""
	I1202 21:48:15.124980  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.124987  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:15.124992  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:15.125055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:15.151140  488914 cri.go:89] found id: ""
	I1202 21:48:15.151155  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.151162  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:15.151168  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:15.151225  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:15.180343  488914 cri.go:89] found id: ""
	I1202 21:48:15.180362  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.180369  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:15.180378  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:15.180389  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:15.245885  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:15.245905  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:15.261189  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:15.261204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:15.329096  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:15.329106  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:15.329119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:15.404768  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:15.404789  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:17.936657  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:17.948615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:17.948678  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:17.980274  488914 cri.go:89] found id: ""
	I1202 21:48:17.980288  488914 logs.go:282] 0 containers: []
	W1202 21:48:17.980295  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:17.980301  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:17.980358  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:18.009972  488914 cri.go:89] found id: ""
	I1202 21:48:18.009988  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.009995  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:18.010000  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:18.010068  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:18.037292  488914 cri.go:89] found id: ""
	I1202 21:48:18.037307  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.037314  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:18.037320  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:18.037389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:18.068010  488914 cri.go:89] found id: ""
	I1202 21:48:18.068025  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.068034  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:18.068039  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:18.068100  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:18.098519  488914 cri.go:89] found id: ""
	I1202 21:48:18.098537  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.098545  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:18.098552  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:18.098616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:18.125321  488914 cri.go:89] found id: ""
	I1202 21:48:18.125336  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.125343  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:18.125349  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:18.125408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:18.154110  488914 cri.go:89] found id: ""
	I1202 21:48:18.154124  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.154131  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:18.154139  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:18.154161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:18.186862  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:18.186879  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:18.252168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:18.252188  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:18.267297  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:18.267312  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:18.330969  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:18.322138   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.322985   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.324625   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.325317   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.326981   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:18.322138   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.322985   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.324625   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.325317   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.326981   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:18.330979  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:18.330989  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
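
The failing step itself is a plain shell invocation of the bundled kubectl with an explicit kubeconfig. A sketch that runs the same command and separates stdout from stderr the way the "stdout:" / "stderr:" sections above present them (the binary and kubeconfig paths are taken from the log; the wrapper is not minikube's):

    // describe_nodes.go: invoke the same kubectl command as the failing step,
    // capturing stdout and stderr separately as the log sections do.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        fmt.Println("stdout:", stdout.String())
        fmt.Println("stderr:", stderr.String())
        if err != nil {
            fmt.Println("exit:", err) // prints "exit status 1" while the apiserver is down
        }
    }
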
	I1202 21:48:20.906864  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:20.918719  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:20.918779  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:20.946664  488914 cri.go:89] found id: ""
	I1202 21:48:20.946681  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.946688  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:20.946694  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:20.946757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:20.973074  488914 cri.go:89] found id: ""
	I1202 21:48:20.973088  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.973095  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:20.973100  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:20.973160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:20.998478  488914 cri.go:89] found id: ""
	I1202 21:48:20.998495  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.998503  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:20.998509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:20.998582  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:21.033676  488914 cri.go:89] found id: ""
	I1202 21:48:21.033691  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.033708  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:21.033714  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:21.033773  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:21.059527  488914 cri.go:89] found id: ""
	I1202 21:48:21.059549  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.059557  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:21.059562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:21.059623  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:21.088534  488914 cri.go:89] found id: ""
	I1202 21:48:21.088548  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.088555  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:21.088562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:21.088618  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:21.114102  488914 cri.go:89] found id: ""
	I1202 21:48:21.114116  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.114123  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:21.114130  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:21.114141  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:21.176428  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:21.168087   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.168660   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.170374   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.171027   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.172682   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:21.168087   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.168660   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.170374   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.171027   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.172682   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:21.176438  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:21.176449  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:21.251600  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:21.251621  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:21.278584  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:21.278600  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:21.350258  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:21.350279  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
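
The five memcache.go errors per attempt are kubectl's client-side API group discovery retries; they cannot even start until the socket opens. Once a port does accept connections, a check against the apiserver's readiness endpoint distinguishes "listening" from "ready". A sketch under stated assumptions: /readyz is the standard apiserver endpoint rather than anything read from this log, the probe is anonymous (so no client certificate, and TLS verification is skipped), and default RBAC permits unauthenticated /readyz access:

    // readyz_probe.go: anonymous HTTPS probe of the apiserver readiness endpoint.
    // TLS verification is skipped because this probe presents no client certificate.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 3 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/readyz")
        if err != nil {
            fmt.Println("not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }
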
	I1202 21:48:23.865709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:23.876050  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:23.876119  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:23.906000  488914 cri.go:89] found id: ""
	I1202 21:48:23.906014  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.906021  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:23.906027  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:23.906094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:23.934001  488914 cri.go:89] found id: ""
	I1202 21:48:23.934015  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.934022  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:23.934028  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:23.934088  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:23.969619  488914 cri.go:89] found id: ""
	I1202 21:48:23.969633  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.969640  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:23.969645  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:23.969710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:23.997123  488914 cri.go:89] found id: ""
	I1202 21:48:23.997137  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.997144  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:23.997149  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:23.997211  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:24.027561  488914 cri.go:89] found id: ""
	I1202 21:48:24.027576  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.027584  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:24.027590  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:24.027660  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:24.053543  488914 cri.go:89] found id: ""
	I1202 21:48:24.053558  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.053565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:24.053570  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:24.053641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:24.080080  488914 cri.go:89] found id: ""
	I1202 21:48:24.080094  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.080101  488914 logs.go:284] No container was found matching "kindnet"
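All seven probes above come back empty: minikube asks CRI-O for each control-plane component by name and finds no container, running or exited, for any of them. The per-component queries condense to a loop like this sketch (run inside the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done

An empty result for every name means the static pods were never created, which points at the kubelet rather than at any individual component.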
	I1202 21:48:24.080109  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:24.080119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:24.147092  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:24.147112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:24.162650  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:24.162666  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:24.225019  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
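The `describe nodes` failure is a downstream symptom rather than a separate problem: kubectl cannot reach the apiserver on port 8441 because, per the empty container listings above, no kube-apiserver was ever started. One hedged way to confirm nothing is listening on that port from inside the node:

    # both are expected to fail while the apiserver is down
    curl -sk https://localhost:8441/healthz
    sudo ss -tlnp | grep 8441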
	I1202 21:48:24.225029  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:24.225039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:24.300286  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:24.300307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:26.831634  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:26.843079  488914 kubeadm.go:602] duration metric: took 4m3.730369294s to restartPrimaryControlPlane
	W1202 21:48:26.843152  488914 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:48:26.843233  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
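After just over four minutes of retries, restartPrimaryControlPlane gives up and minikube falls back to a full `kubeadm reset`, which is what removes the /etc/kubernetes/*.conf files that the next few checks fail to find. The exact command, reproducible by hand inside the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force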
	I1202 21:48:27.259211  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:48:27.272350  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:48:27.280460  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:48:27.280517  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:48:27.288570  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:48:27.288578  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:48:27.288628  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:48:27.296654  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:48:27.296709  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:48:27.304086  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:48:27.311898  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:48:27.311953  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:48:27.319289  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.326825  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:48:27.326888  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.334620  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:48:27.342084  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:48:27.342139  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
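The block above is minikube's stale-config sweep: for each kubeconfig it greps for the expected control-plane endpoint and deletes the file when the check fails. Because `kubeadm reset` already removed the files, every grep exits with status 2 and each `rm -f` is a no-op. Condensed, the sweep amounts to:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8441 "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done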
	I1202 21:48:27.349467  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:48:27.386582  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:48:27.386896  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:48:27.472364  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:48:27.472439  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:48:27.472489  488914 kubeadm.go:319] OS: Linux
	I1202 21:48:27.472545  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:48:27.472601  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:48:27.472644  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:48:27.472700  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:48:27.472753  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:48:27.472804  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:48:27.472859  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:48:27.472915  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:48:27.472973  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:48:27.543309  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:48:27.543431  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:48:27.543527  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:48:27.554036  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:48:27.559373  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:48:27.559468  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:48:27.559542  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:48:27.559629  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:48:27.559701  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:48:27.559787  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:48:27.559841  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:48:27.559915  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:48:27.559985  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:48:27.560076  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:48:27.560159  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:48:27.560210  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:48:27.560269  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:48:27.850282  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:48:28.505037  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:48:28.762985  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:48:28.951263  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:48:29.183372  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:48:29.184043  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:48:29.186561  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:48:29.189676  488914 out.go:252]   - Booting up control plane ...
	I1202 21:48:29.189765  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:48:29.189838  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:48:29.191619  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:48:29.207350  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:48:29.207778  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:48:29.215590  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:48:29.215853  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:48:29.216063  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:48:29.353309  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:48:29.353417  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:52:29.354218  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001230264s
	I1202 21:52:29.354245  488914 kubeadm.go:319] 
	I1202 21:52:29.354298  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:52:29.354329  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:52:29.354427  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:52:29.354432  488914 kubeadm.go:319] 
	I1202 21:52:29.354529  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:52:29.354559  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:52:29.354587  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:52:29.354590  488914 kubeadm.go:319] 
	I1202 21:52:29.358907  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:52:29.359370  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:52:29.359489  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:52:29.359719  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:52:29.359724  488914 kubeadm.go:319] 
	I1202 21:52:29.359816  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:52:29.359952  488914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230264s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
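The failed wait-control-plane phase polls the kubelet's local healthz endpoint for up to four minutes; the kubelet never becomes healthy, so the probe inside kubeadm times out. The same probe, plus the triage steps the error message itself recommends, runnable inside the node:

    # what kubeadm's [kubelet-check] phase polls
    curl -sSL http://127.0.0.1:10248/healthz
    # triage suggested by the error output above
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager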
	
	I1202 21:52:29.360041  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:52:29.774288  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:52:29.786781  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:52:29.786832  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:52:29.794551  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:52:29.794562  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:52:29.794615  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:52:29.802140  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:52:29.802200  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:52:29.809778  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:52:29.817315  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:52:29.817375  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:52:29.824944  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.832581  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:52:29.832636  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.840105  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:52:29.848039  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:52:29.848102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:52:29.855571  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:52:29.895459  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:52:29.895508  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:52:29.966851  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:52:29.966918  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:52:29.966952  488914 kubeadm.go:319] OS: Linux
	I1202 21:52:29.967027  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:52:29.967074  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:52:29.967120  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:52:29.967166  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:52:29.967212  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:52:29.967259  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:52:29.967302  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:52:29.967348  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:52:29.967393  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:52:30.044273  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:52:30.044406  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:52:30.044512  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:52:30.059289  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:52:30.064606  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:52:30.064707  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:52:30.064778  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:52:30.064861  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:52:30.064927  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:52:30.065002  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:52:30.065061  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:52:30.065130  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:52:30.065197  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:52:30.065280  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:52:30.065358  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:52:30.065394  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:52:30.065457  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:52:30.391272  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:52:30.580061  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:52:30.892953  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:52:31.052311  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:52:31.356833  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:52:31.357398  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:52:31.360444  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:52:31.363666  488914 out.go:252]   - Booting up control plane ...
	I1202 21:52:31.363767  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:52:31.363843  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:52:31.364787  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:52:31.380952  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:52:31.381067  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:52:31.389182  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:52:31.389514  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:52:31.389769  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:52:31.510935  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:52:31.511077  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:56:31.511610  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043188s
	I1202 21:56:31.511635  488914 kubeadm.go:319] 
	I1202 21:56:31.511691  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:56:31.511724  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:56:31.511828  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:56:31.511833  488914 kubeadm.go:319] 
	I1202 21:56:31.511936  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:56:31.511966  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:56:31.511996  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:56:31.511999  488914 kubeadm.go:319] 
	I1202 21:56:31.516147  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:56:31.516591  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:56:31.516707  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:56:31.516982  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:56:31.516989  488914 kubeadm.go:319] 
	I1202 21:56:31.517086  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
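The second attempt fails identically to the first, which makes the cgroups v1 warning the most interesting line: the host kernel (5.15.0-1084-aws) is on cgroup v1, and per the warning, kubelet v1.35 and newer must be explicitly told to tolerate that. Going by the option name given in the warning text, the opt-out is a KubeletConfiguration field; a hedged sketch of such a fragment (the field name follows the warning's 'FailCgroupV1'; the exact plumbing through minikube or kubeadm patches may differ):

    # hedged sketch: the cgroup v1 opt-out as a kubelet config fragment
    cat <<'EOF' > kubelet-cgroupv1-patch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF

Whether this is the actual root cause cannot be confirmed from this log alone; `journalctl -xeu kubelet` on the node would show the kubelet's own exit reason.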
	I1202 21:56:31.517154  488914 kubeadm.go:403] duration metric: took 12m8.4399317s to StartCluster
	I1202 21:56:31.517186  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:56:31.517279  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:56:31.545508  488914 cri.go:89] found id: ""
	I1202 21:56:31.545521  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.545528  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:56:31.545538  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:56:31.545593  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:56:31.573505  488914 cri.go:89] found id: ""
	I1202 21:56:31.573519  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.573526  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:56:31.573532  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:56:31.573594  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:56:31.598620  488914 cri.go:89] found id: ""
	I1202 21:56:31.598634  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.598642  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:56:31.598647  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:56:31.598718  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:56:31.624500  488914 cri.go:89] found id: ""
	I1202 21:56:31.624514  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.624522  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:56:31.624528  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:56:31.624590  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:56:31.650576  488914 cri.go:89] found id: ""
	I1202 21:56:31.650591  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.650598  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:56:31.650604  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:56:31.650665  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:56:31.677681  488914 cri.go:89] found id: ""
	I1202 21:56:31.677696  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.677703  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:56:31.677709  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:56:31.677772  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:56:31.702889  488914 cri.go:89] found id: ""
	I1202 21:56:31.702903  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.702910  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:56:31.702918  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:56:31.702928  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:56:31.769428  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:56:31.769447  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:56:31.784680  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:56:31.784696  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:56:31.848558  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:56:31.848570  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:56:31.848581  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:56:31.924323  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:56:31.924343  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:56:31.952600  488914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:56:31.952799  488914 out.go:285] * 
	W1202 21:56:31.955203  488914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
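If this needs to be reported upstream, the box above names the exact artifact maintainers ask for; against this profile that is:

    minikube -p functional-066896 logs --file=logs.txt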
	I1202 21:56:31.960375  488914 out.go:203] 
	W1202 21:56:31.963105  488914 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:56:31.963144  488914 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:56:31.963163  488914 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:56:31.966130  488914 out.go:203] 
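The exit path pairs the K8S_KUBELET_NOT_RUNNING reason code with a concrete retry suggestion. Spelled out against this profile (a sketch of the suggested flag only; it may or may not address the cgroup v1 situation noted in the preflight warnings):

    minikube start -p functional-066896 --extra-config=kubelet.cgroup-driver=systemd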
	
	
	==> CRI-O <==
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319873349Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319910075Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319954892Z" level=info msg="Create NRI interface"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320093511Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320107279Z" level=info msg="runtime interface created"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320122122Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320128136Z" level=info msg="runtime interface starting up..."
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320134996Z" level=info msg="starting plugins..."
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320149281Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320216843Z" level=info msg="No systemd watchdog enabled"
	Dec 02 21:44:21 functional-066896 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.546712318Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5da9591-660f-4540-8512-c986d215b6ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.547792951Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b20776b9-be53-485c-9f3f-546c9d76585b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.551358438Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=f47f53c0-c041-4cd6-b337-b6da20818107 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.551918566Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=288944b7-03e2-4e35-a724-f6224d5602e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.552447137Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=86197a73-b72f-4206-9f23-0ccc39ed5484 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.552829015Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=782c6892-90a1-4091-890f-06b9d64d90fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.553189609Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=83b463c2-49c3-44a9-847b-496ac7b6cf23 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.050953685Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b9fc70f0-4149-490c-a9d6-8566800da526 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.054549369Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=400c0ed4-9f98-4d43-b1d5-914d2118a5d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.055394817Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=589e6b72-3a61-4459-a0b4-17ed249e317e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.056191886Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e9ce8021-4d34-47a9-aae1-148ec62cef62 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.056880533Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b08f0833-8ddd-4fa8-bbfc-c47b34e5d923 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.057536891Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=65dc1a28-c124-4bed-86cd-bb1d6daa17da name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.058148081Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=948033be-9e28-424a-bbda-a82698af2fb7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:33.181479   21756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:33.182108   21756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:33.183648   21756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:33.184138   21756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:33.185673   21756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:56:33 up  3:38,  0 user,  load average: 0.20, 0.17, 0.32
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:56:30 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:31 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 02 21:56:31 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:31 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:31 functional-066896 kubelet[21566]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:31 functional-066896 kubelet[21566]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:31 functional-066896 kubelet[21566]: E1202 21:56:31.459550   21566 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:31 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:31 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 02 21:56:32 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:32 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:32 functional-066896 kubelet[21656]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:32 functional-066896 kubelet[21656]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:32 functional-066896 kubelet[21656]: E1202 21:56:32.209360   21656 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 02 21:56:32 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:32 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:32 functional-066896 kubelet[21702]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:32 functional-066896 kubelet[21702]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:32 functional-066896 kubelet[21702]: E1202 21:56:32.979600   21702 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
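The log above shows the actual root cause: the node runs cgroups v1, and kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd cycles through restarts 962-964 and kubeadm times out waiting on http://127.0.0.1:10248/healthz. With kubelet down, the static control-plane pods never start and every later apiserver check fails. A minimal triage sketch along the lines the log itself suggests; the stat and docker commands are standard, while the failCgroupV1 field name is inferred from the kubeadm warning text and should be checked against the linked KEP before relying on it:

	# Confirm the host cgroup version (cgroup2fs => v2, tmpfs => v1)
	stat -fc %T /sys/fs/cgroup
	docker info --format '{{.CgroupVersion}}'

	# Remediation suggested by minikube in the log above
	out/minikube-linux-arm64 start -p functional-066896 --extra-config=kubelet.cgroup-driver=systemd

	# Alternative named by the kubeadm warning: explicitly opt kubelet back into
	# cgroup v1 via a KubeletConfiguration patch (field name taken from the warning):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false
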
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (357.516594ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-066896 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-066896 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (63.449181ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-066896 get po -l tier=control-plane -n kube-system -o=json": exit status 1
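This is the same failure seen in ExtraConfig: nothing listens on the apiserver port because kubelet never launched the static pods, so kubectl's "connection refused" indicates a dead server, not a wrong address. A quick sketch to tell the two apart (assumes ss is available in the node image, which is typical for kicbase but not verified here):

	out/minikube-linux-arm64 -p functional-066896 ssh -- sudo ss -ltnp | grep 8441   # no listener => apiserver never started
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896    # the harness ran this above and got "Stopped"
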
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
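One detail worth noting in the inspect output: every node port is published only on 127.0.0.1 with an ephemeral host port (8441/tcp maps to 33151 on this run). The harness reads these mappings with a Go template, the same pattern visible for 22/tcp in the Last Start log below; a sketch for the apiserver port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-066896
	# prints 33151 for this run; the mapping changes whenever the container is recreated
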
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (323.052258ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-218190 ssh pgrep buildkitd                                                                                                             │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ image   │ functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr                                            │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format yaml --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format json --alsologtostderr                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls --format table --alsologtostderr                                                                                       │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ image   │ functional-218190 image ls                                                                                                                        │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ delete  │ -p functional-218190                                                                                                                              │ functional-218190 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ start   │ -p functional-066896 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start   │ -p functional-066896 --alsologtostderr -v=8                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:37 UTC │                     │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add registry.k8s.io/pause:latest                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache add minikube-local-cache-test:functional-066896                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ functional-066896 cache delete minikube-local-cache-test:functional-066896                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl images                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ cache   │ functional-066896 cache reload                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ ssh     │ functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │ 02 Dec 25 21:44 UTC │
	│ kubectl │ functional-066896 kubectl -- --context functional-066896 get pods                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ start   │ -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:44:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:44:17.650988  488914 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:44:17.651127  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651131  488914 out.go:374] Setting ErrFile to fd 2...
	I1202 21:44:17.651134  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651388  488914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:44:17.651725  488914 out.go:368] Setting JSON to false
	I1202 21:44:17.652562  488914 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12386,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:44:17.652624  488914 start.go:143] virtualization:  
	I1202 21:44:17.655925  488914 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:44:17.658824  488914 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:44:17.658955  488914 notify.go:221] Checking for updates...
	I1202 21:44:17.664772  488914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:44:17.667672  488914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:44:17.670581  488914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:44:17.673492  488914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:44:17.676281  488914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:44:17.679520  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:17.679615  488914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:44:17.708368  488914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:44:17.708467  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.767956  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.759221256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.768046  488914 docker.go:319] overlay module found
	I1202 21:44:17.771104  488914 out.go:179] * Using the docker driver based on existing profile
	I1202 21:44:17.773889  488914 start.go:309] selected driver: docker
	I1202 21:44:17.773897  488914 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.773983  488914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:44:17.774077  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.834934  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.825868601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.835402  488914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:44:17.835426  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:17.835482  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:17.835523  488914 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.838587  488914 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:44:17.841458  488914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:44:17.844370  488914 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:44:17.847200  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:17.847277  488914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:44:17.866587  488914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:44:17.866598  488914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:44:17.909149  488914 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:44:18.073530  488914 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:44:18.073687  488914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:44:18.073803  488914 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073909  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:44:18.073917  488914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.617µs
	I1202 21:44:18.073927  488914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:44:18.073937  488914 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:44:18.073939  488914 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073964  488914 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073980  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:44:18.073986  488914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.935µs
	I1202 21:44:18.073991  488914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074001  488914 start.go:364] duration metric: took 25.551µs to acquireMachinesLock for "functional-066896"
	I1202 21:44:18.074000  488914 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074014  488914 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:44:18.074021  488914 fix.go:54] fixHost starting: 
	I1202 21:44:18.074029  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:44:18.074034  488914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.037µs
	I1202 21:44:18.074039  488914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074056  488914 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074084  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:44:18.074089  488914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 41.329µs
	I1202 21:44:18.074093  488914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074101  488914 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074151  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:44:18.074156  488914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 55.623µs
	I1202 21:44:18.074160  488914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074169  488914 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074193  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:44:18.074211  488914 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.457µs
	I1202 21:44:18.074217  488914 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:44:18.074232  488914 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074258  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:44:18.074262  488914 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.032µs
	I1202 21:44:18.074267  488914 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:44:18.074276  488914 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:44:18.074274  488914 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074311  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:44:18.074315  488914 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.174µs
	I1202 21:44:18.074320  488914 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:44:18.074327  488914 cache.go:87] Successfully saved all images to host disk.
	I1202 21:44:18.091506  488914 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:44:18.091527  488914 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:44:18.096748  488914 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:44:18.096772  488914 machine.go:94] provisionDockerMachine start ...
	I1202 21:44:18.096874  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.114456  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.114786  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.114793  488914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:44:18.266794  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.266809  488914 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:44:18.266875  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.286274  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.286575  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.286589  488914 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:44:18.448160  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.448232  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.466449  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.466766  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.466781  488914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:44:18.615365  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:44:18.615380  488914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:44:18.615404  488914 ubuntu.go:190] setting up certificates
	I1202 21:44:18.615412  488914 provision.go:84] configureAuth start
	I1202 21:44:18.615471  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:18.633069  488914 provision.go:143] copyHostCerts
	I1202 21:44:18.633141  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:44:18.633158  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:44:18.633234  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:44:18.633330  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:44:18.633334  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:44:18.633359  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:44:18.633406  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:44:18.633410  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:44:18.633430  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:44:18.633475  488914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
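
The "generating server cert" line requests a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, functional-066896, localhost and minikube. A hedged sketch of what that involves using crypto/x509; minikube signs with its CA, but for brevity this sketch self-signs:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-066896"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		// the SAN list from the log line above
		DNSNames:    []string{"functional-066896", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
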
	I1202 21:44:19.174279  488914 provision.go:177] copyRemoteCerts
	I1202 21:44:19.174331  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:44:19.174370  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.190978  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.294889  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:44:19.312628  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:44:19.330566  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:44:19.347713  488914 provision.go:87] duration metric: took 732.278587ms to configureAuth
	I1202 21:44:19.347730  488914 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:44:19.347935  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:19.348040  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.364877  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:19.365168  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:19.365182  488914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:44:19.733535  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:44:19.733548  488914 machine.go:97] duration metric: took 1.636769982s to provisionDockerMachine
	I1202 21:44:19.733558  488914 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:44:19.733570  488914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:44:19.733637  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:44:19.733700  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.752520  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.854929  488914 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:44:19.858053  488914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:44:19.858070  488914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:44:19.858080  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:44:19.858131  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:44:19.858206  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:44:19.858277  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:44:19.858317  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:44:19.865625  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:19.882511  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:44:19.899291  488914 start.go:296] duration metric: took 165.718396ms for postStartSetup
	I1202 21:44:19.899374  488914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:44:19.899409  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.915689  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.016990  488914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:44:20.022912  488914 fix.go:56] duration metric: took 1.948885968s for fixHost
	I1202 21:44:20.022943  488914 start.go:83] releasing machines lock for "functional-066896", held for 1.948933476s
	I1202 21:44:20.023059  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:20.041984  488914 ssh_runner.go:195] Run: cat /version.json
	I1202 21:44:20.042007  488914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:44:20.042033  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.042071  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.064148  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.064737  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.168080  488914 ssh_runner.go:195] Run: systemctl --version
	I1202 21:44:20.290437  488914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:44:20.326220  488914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:44:20.331076  488914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:44:20.331137  488914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:44:20.338791  488914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:44:20.338805  488914 start.go:496] detecting cgroup driver to use...
	I1202 21:44:20.338835  488914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:44:20.338881  488914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:44:20.354128  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:44:20.367183  488914 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:44:20.367236  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:44:20.383031  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:44:20.396225  488914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:44:20.505938  488914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:44:20.631853  488914 docker.go:234] disabling docker service ...
	I1202 21:44:20.631909  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:44:20.647481  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:44:20.660948  488914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:44:20.779859  488914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:44:20.901936  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:44:20.922332  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:44:20.937696  488914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:44:20.937766  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.947525  488914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:44:20.947591  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.956868  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.966757  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.976111  488914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:44:20.984116  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.993108  488914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.003934  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.015041  488914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:44:21.023179  488914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:44:21.030977  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.150076  488914 ssh_runner.go:195] Run: sudo systemctl restart crio
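
The sed -i commands above rewrite single keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, default sysctls) before the daemon-reload and crio restart. A hedged in-process sketch of one such line-oriented rewrite, using Go's regexp in multi-line mode:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// stand-in for the contents of /etc/crio/crio.conf.d/02-crio.conf
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`

	// (?m) makes ^ and $ match per line, like sed's line-oriented s|...|...|
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	fmt.Println(out)
}
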
	I1202 21:44:21.327555  488914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:44:21.327622  488914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:44:21.331404  488914 start.go:564] Will wait 60s for crictl version
	I1202 21:44:21.331471  488914 ssh_runner.go:195] Run: which crictl
	I1202 21:44:21.335016  488914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:44:21.359060  488914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:44:21.359133  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.387110  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.420984  488914 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:44:21.423772  488914 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:44:21.440341  488914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:44:21.447237  488914 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:44:21.449900  488914 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:44:21.450046  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:21.450110  488914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:44:21.483620  488914 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:44:21.483631  488914 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:44:21.483637  488914 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:44:21.483726  488914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:44:21.483815  488914 ssh_runner.go:195] Run: crio config
	I1202 21:44:21.540157  488914 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:44:21.540183  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:21.540190  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:21.540200  488914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:44:21.540251  488914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:44:21.540412  488914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
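The kubeadm config above is rendered from the option struct logged at kubeadm.go:190. A minimal text/template sketch of that kind of rendering, trimmed to a handful of fields (the struct and template here are illustrative, not minikube's actual ones):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds a few of the values visible in the log's option struct.
type kubeadmParams struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.49.2",
		APIServerPort:     8441,
		KubernetesVersion: "v1.35.0-beta.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
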
	I1202 21:44:21.540486  488914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:44:21.551296  488914 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:44:21.551378  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:44:21.559159  488914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:44:21.572470  488914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:44:21.586886  488914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1202 21:44:21.600852  488914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:44:21.604702  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.760401  488914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:44:22.412975  488914 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:44:22.412987  488914 certs.go:195] generating shared ca certs ...
	I1202 21:44:22.413002  488914 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:44:22.413155  488914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:44:22.413195  488914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:44:22.413201  488914 certs.go:257] generating profile certs ...
	I1202 21:44:22.413284  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:44:22.413360  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:44:22.413398  488914 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:44:22.413511  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:44:22.413543  488914 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:44:22.413552  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:44:22.413581  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:44:22.413604  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:44:22.413626  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:44:22.413674  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:22.414299  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:44:22.434951  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:44:22.453111  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:44:22.472098  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:44:22.493256  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:44:22.511523  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:44:22.529485  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:44:22.547667  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:44:22.565085  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:44:22.583650  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:44:22.601678  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:44:22.619263  488914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:44:22.631918  488914 ssh_runner.go:195] Run: openssl version
	I1202 21:44:22.638008  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:44:22.646246  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.649963  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.650030  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.691947  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:44:22.699744  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:44:22.707750  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711346  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711410  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.752553  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:44:22.760779  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:44:22.769102  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.772990  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.773054  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.817125  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:44:22.825521  488914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:44:22.829263  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:44:22.870268  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:44:22.912651  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:44:22.953793  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:44:22.994690  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:44:23.036128  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
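
Each "openssl x509 ... -checkend 86400" call above asks whether a certificate expires within 24 hours (86400 seconds). An equivalent check in Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// same threshold as -checkend 86400
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
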
	I1202 21:44:23.077233  488914 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:23.077311  488914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:44:23.077384  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.104728  488914 cri.go:89] found id: ""
	I1202 21:44:23.104787  488914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:44:23.112693  488914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:44:23.112702  488914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:44:23.112754  488914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:44:23.120199  488914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.120715  488914 kubeconfig.go:125] found "functional-066896" server: "https://192.168.49.2:8441"
	I1202 21:44:23.122004  488914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:44:23.129849  488914 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:29:46.719862797 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:44:21.596345133 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 21:44:23.129868  488914 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:44:23.129878  488914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 21:44:23.129934  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.164567  488914 cri.go:89] found id: ""
	I1202 21:44:23.164629  488914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:44:23.192730  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:44:23.201193  488914 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5576 Dec  2 21:33 /etc/kubernetes/scheduler.conf
	
	I1202 21:44:23.201254  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:44:23.209100  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:44:23.217145  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.217201  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:44:23.224901  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.232713  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.232773  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.240473  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:44:23.248046  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.248102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:44:23.255508  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:44:23.263587  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:23.311842  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.167347  488914 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.855478015s)
	I1202 21:44:25.167416  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.367575  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.433420  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.478422  488914 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:44:25.478494  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:25.978693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.479461  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.978647  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.479295  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.979313  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.479548  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.979300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.478679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.479305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.979214  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.478682  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.979440  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.478676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.978971  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.478687  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.479399  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.978686  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.479541  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.979365  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.478985  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.978766  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.478652  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.979222  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.478642  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.979289  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.479367  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.978641  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.478896  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.479195  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.979035  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.478597  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.978688  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.478820  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.979413  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.478702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.979325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.478716  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.979514  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.479502  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.978679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.479602  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.978676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.979208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.479262  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.978947  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.478848  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.979340  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.478943  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.979631  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.479208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.978824  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.478692  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.978621  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.479381  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.979217  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.479300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.979309  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.478661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.978590  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.478589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.979149  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.479524  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.979613  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.979556  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.479181  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.479560  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.979258  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.478693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.979403  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.479145  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.979083  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.478795  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.979236  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.478753  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.479607  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.479438  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.978717  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.478907  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.979407  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.478991  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.979216  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.479168  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.979304  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.479589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.979207  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.478756  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.979408  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.979186  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.478671  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.979155  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.478781  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.478767  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.978709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.478610  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.979395  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.479136  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.978666  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.479565  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.978675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.979164  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.478675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.978579  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
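
The ~500ms pgrep run above is a poll-until-deadline loop waiting for the kube-apiserver process to appear; here it never does, so the wait ends after roughly a minute and log gathering starts below. A minimal sketch of that polling pattern (names are illustrative, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep on a ~500ms interval until the pattern
// matches a running process or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}
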
	I1202 21:45:25.479540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:25.479652  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:25.504711  488914 cri.go:89] found id: ""
	I1202 21:45:25.504725  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.504732  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:25.504738  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:25.504795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:25.529752  488914 cri.go:89] found id: ""
	I1202 21:45:25.529766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.529773  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:25.529778  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:25.529838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:25.555068  488914 cri.go:89] found id: ""
	I1202 21:45:25.555082  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.555089  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:25.555095  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:25.555154  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:25.583996  488914 cri.go:89] found id: ""
	I1202 21:45:25.584010  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.584017  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:25.584023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:25.584083  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:25.613039  488914 cri.go:89] found id: ""
	I1202 21:45:25.613053  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.613060  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:25.613065  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:25.613125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:25.638912  488914 cri.go:89] found id: ""
	I1202 21:45:25.638926  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.638933  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:25.638938  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:25.639016  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:25.663753  488914 cri.go:89] found id: ""
	I1202 21:45:25.663766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.663773  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:25.663781  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:25.663793  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:25.693023  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:25.693040  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:25.759763  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:25.759782  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:25.774658  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:25.774679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:25.838644  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:25.838656  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:25.838667  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
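	The entries above form one pass of minikube's apiserver health probe: it polls pgrep for a kube-apiserver process roughly every 500 ms, then once per ~3 s cycle asks CRI-O for each control-plane container and gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of that probe, using only the commands shown in the log (run on the minikube node; the loop structure here is illustrative, not minikube's actual code):

	    while true; do
	      # A running kube-apiserver process ends the wait.
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      # Otherwise check whether CRI-O ever created the control-plane containers;
	      # empty output corresponds to the 'No container was found matching' warnings.
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$name"
	      done
	      sleep 3
	    done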
	I1202 21:45:28.417551  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:28.428847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:28.428924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:28.461391  488914 cri.go:89] found id: ""
	I1202 21:45:28.461406  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.461413  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:28.461418  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:28.461487  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:28.493536  488914 cri.go:89] found id: ""
	I1202 21:45:28.493549  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.493556  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:28.493561  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:28.493625  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:28.521334  488914 cri.go:89] found id: ""
	I1202 21:45:28.521347  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.521354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:28.521360  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:28.521429  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:28.546459  488914 cri.go:89] found id: ""
	I1202 21:45:28.546472  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.546479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:28.546484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:28.546558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:28.573310  488914 cri.go:89] found id: ""
	I1202 21:45:28.573325  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.573332  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:28.573338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:28.573398  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:28.603231  488914 cri.go:89] found id: ""
	I1202 21:45:28.603245  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.603252  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:28.603259  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:28.603339  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:28.628995  488914 cri.go:89] found id: ""
	I1202 21:45:28.629009  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.629016  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:28.629024  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:28.629034  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:28.694293  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:28.694315  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:28.709309  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:28.709326  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:28.772742  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:28.772763  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:28.772775  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.851065  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:28.851099  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
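	Every describe-nodes attempt fails the same way: nothing is listening on the apiserver port, so kubectl's connection to localhost:8441 is refused. Two quick manual checks on the node would confirm this (a sketch; the kubectl invocation is copied from the log, the curl probe is an assumed extra step):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig   # exits 1 while the apiserver is down
	    curl -sk https://localhost:8441/healthz         # "connection refused" until kube-apiserver binds the port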
	I1202 21:45:31.383921  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:31.394465  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:31.394529  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:31.432030  488914 cri.go:89] found id: ""
	I1202 21:45:31.432046  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.432053  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:31.432061  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:31.432122  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:31.469314  488914 cri.go:89] found id: ""
	I1202 21:45:31.469327  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.469334  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:31.469339  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:31.469399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:31.495701  488914 cri.go:89] found id: ""
	I1202 21:45:31.495715  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.495721  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:31.495726  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:31.495783  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:31.525459  488914 cri.go:89] found id: ""
	I1202 21:45:31.525472  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.525479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:31.525484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:31.525548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:31.551543  488914 cri.go:89] found id: ""
	I1202 21:45:31.551557  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.551564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:31.551569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:31.551635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:31.576459  488914 cri.go:89] found id: ""
	I1202 21:45:31.576473  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.576479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:31.576485  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:31.576543  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:31.605711  488914 cri.go:89] found id: ""
	I1202 21:45:31.605726  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.605733  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:31.605741  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:31.605752  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.637077  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:31.637094  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:31.704571  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:31.704592  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:31.719615  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:31.719640  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:31.784987  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:31.776784   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.777502   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779172   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779783   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.781463   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:31.785007  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:31.785019  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.367127  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:34.377127  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:34.377203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:34.402736  488914 cri.go:89] found id: ""
	I1202 21:45:34.402750  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.402757  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:34.402769  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:34.402864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:34.443728  488914 cri.go:89] found id: ""
	I1202 21:45:34.443742  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.443749  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:34.443754  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:34.443815  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:34.479956  488914 cri.go:89] found id: ""
	I1202 21:45:34.479970  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.479985  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:34.479991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:34.480055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:34.508482  488914 cri.go:89] found id: ""
	I1202 21:45:34.508503  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.508510  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:34.508516  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:34.508573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:34.534801  488914 cri.go:89] found id: ""
	I1202 21:45:34.534814  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.534821  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:34.534826  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:34.534884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:34.559463  488914 cri.go:89] found id: ""
	I1202 21:45:34.559477  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.559484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:34.559490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:34.559551  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:34.584528  488914 cri.go:89] found id: ""
	I1202 21:45:34.584543  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.584550  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:34.584557  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:34.584568  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:34.651241  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:34.651261  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:34.666228  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:34.666244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:34.728086  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:34.720557   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.720952   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.722671   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.723025   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.724562   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:34.728108  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:34.728120  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.804348  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:34.804369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:37.332022  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:37.341829  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:37.341888  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:37.366064  488914 cri.go:89] found id: ""
	I1202 21:45:37.366078  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.366085  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:37.366090  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:37.366147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:37.395570  488914 cri.go:89] found id: ""
	I1202 21:45:37.395584  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.395590  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:37.395595  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:37.395663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:37.429125  488914 cri.go:89] found id: ""
	I1202 21:45:37.429140  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.429147  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:37.429161  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:37.429218  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:37.462030  488914 cri.go:89] found id: ""
	I1202 21:45:37.462054  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.462062  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:37.462080  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:37.462152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:37.490229  488914 cri.go:89] found id: ""
	I1202 21:45:37.490242  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.490260  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:37.490266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:37.490349  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:37.515496  488914 cri.go:89] found id: ""
	I1202 21:45:37.515510  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.515516  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:37.515522  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:37.515578  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:37.544546  488914 cri.go:89] found id: ""
	I1202 21:45:37.544560  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.544567  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:37.544575  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:37.544586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:37.617995  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:37.618023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:37.634282  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:37.634307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:37.704089  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:37.696265   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.697434   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.698656   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.699121   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.700652   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:37.704099  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:37.704110  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:37.780382  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:37.780402  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.308261  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:40.318898  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:40.318954  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:40.351388  488914 cri.go:89] found id: ""
	I1202 21:45:40.351403  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.351409  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:40.351415  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:40.351476  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:40.376844  488914 cri.go:89] found id: ""
	I1202 21:45:40.376857  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.376864  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:40.376869  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:40.376927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:40.400732  488914 cri.go:89] found id: ""
	I1202 21:45:40.400745  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.400752  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:40.400757  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:40.400816  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:40.446048  488914 cri.go:89] found id: ""
	I1202 21:45:40.446061  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.446067  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:40.446075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:40.446134  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:40.475997  488914 cri.go:89] found id: ""
	I1202 21:45:40.476011  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.476018  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:40.476023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:40.476081  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:40.501615  488914 cri.go:89] found id: ""
	I1202 21:45:40.501629  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.501636  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:40.501642  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:40.501705  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:40.526763  488914 cri.go:89] found id: ""
	I1202 21:45:40.526809  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.526816  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:40.526831  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:40.526842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:40.542072  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:40.542088  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:40.603416  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:40.594977   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.595712   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.597533   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.598122   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.599848   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:40.603427  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:40.603437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:40.683775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:40.683797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.710561  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:40.710577  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.275783  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:43.286075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:43.286135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:43.312011  488914 cri.go:89] found id: ""
	I1202 21:45:43.312026  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.312033  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:43.312039  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:43.312099  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:43.337316  488914 cri.go:89] found id: ""
	I1202 21:45:43.337330  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.337337  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:43.337359  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:43.337418  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:43.369627  488914 cri.go:89] found id: ""
	I1202 21:45:43.369641  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.369648  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:43.369653  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:43.369714  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:43.395672  488914 cri.go:89] found id: ""
	I1202 21:45:43.395686  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.395693  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:43.395698  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:43.395757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:43.436721  488914 cri.go:89] found id: ""
	I1202 21:45:43.436735  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.436742  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:43.436747  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:43.436808  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:43.468979  488914 cri.go:89] found id: ""
	I1202 21:45:43.468993  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.469008  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:43.469014  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:43.469084  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:43.500825  488914 cri.go:89] found id: ""
	I1202 21:45:43.500839  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.500846  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:43.500854  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:43.500864  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:43.537110  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:43.537127  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.604154  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:43.604172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:43.619529  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:43.619546  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:43.684232  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:43.676801   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.677191   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.678735   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.679232   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.680785   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:43.684242  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:43.684253  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
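	The "Gathering logs for ..." steps map one-to-one to the shell commands recorded in the log; running them by hand on the node reproduces what minikube collects after each failed probe (commands verbatim from the entries above):

	    sudo journalctl -u kubelet -n 400                                        # kubelet
	    sudo journalctl -u crio -n 400                                           # CRI-O
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # dmesg
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status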
	I1202 21:45:46.262533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:46.273030  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:46.273094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:46.298023  488914 cri.go:89] found id: ""
	I1202 21:45:46.298039  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.298045  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:46.298051  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:46.298109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:46.327737  488914 cri.go:89] found id: ""
	I1202 21:45:46.327752  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.327760  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:46.327769  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:46.327834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:46.353980  488914 cri.go:89] found id: ""
	I1202 21:45:46.353994  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.354003  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:46.354008  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:46.354073  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:46.380386  488914 cri.go:89] found id: ""
	I1202 21:45:46.380400  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.380406  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:46.380412  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:46.380480  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:46.406595  488914 cri.go:89] found id: ""
	I1202 21:45:46.406609  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.406616  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:46.406621  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:46.406679  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:46.441216  488914 cri.go:89] found id: ""
	I1202 21:45:46.441230  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.441237  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:46.441242  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:46.441305  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:46.473258  488914 cri.go:89] found id: ""
	I1202 21:45:46.473272  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.473279  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:46.473287  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:46.473298  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:46.490441  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:46.490458  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:46.554481  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:46.546212   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.546743   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548456   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548932   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.550452   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:46.554490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:46.554501  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.631777  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:46.631800  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:46.660339  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:46.660355  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:49.231885  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:49.243758  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:49.243823  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:49.268714  488914 cri.go:89] found id: ""
	I1202 21:45:49.268728  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.268735  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:49.268741  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:49.268799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:49.293827  488914 cri.go:89] found id: ""
	I1202 21:45:49.293842  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.293849  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:49.293854  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:49.293919  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:49.319633  488914 cri.go:89] found id: ""
	I1202 21:45:49.319647  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.319654  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:49.319661  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:49.319720  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:49.350167  488914 cri.go:89] found id: ""
	I1202 21:45:49.350181  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.350188  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:49.350193  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:49.350252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:49.375814  488914 cri.go:89] found id: ""
	I1202 21:45:49.375828  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.375835  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:49.375841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:49.375905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:49.400638  488914 cri.go:89] found id: ""
	I1202 21:45:49.400657  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.400664  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:49.400670  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:49.400727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:49.453654  488914 cri.go:89] found id: ""
	I1202 21:45:49.453668  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.453680  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:49.453689  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:49.453699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:49.479146  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:49.479161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:49.548448  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:49.548457  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:49.548468  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:49.628739  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:49.628759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:49.658161  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:49.658177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
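The cycle above is the harness's control-plane probe, repeated every ~3 seconds: pgrep for a live kube-apiserver process, a crictl lookup per component (all returning empty), then log gathering (dmesg, describe nodes, CRI-O, container status, kubelet). It can be replayed by hand with the same commands the log records; a minimal sketch, assuming shell access to the node (e.g. via `minikube ssh`):

```bash
# Replay of the probe cycle recorded above (sketch; run on the minikube node).
# Every command here appears verbatim in the log.
sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # any apiserver process?
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet; do
  sudo crictl ps -a --quiet --name="$c"        # empty output = "0 containers"
done
sudo journalctl -u kubelet -n 400              # kubelet logs
sudo journalctl -u crio -n 400                 # CRI-O logs
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
```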
	I1202 21:45:52.223612  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:52.234793  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:52.234899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:52.265577  488914 cri.go:89] found id: ""
	I1202 21:45:52.265591  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.265598  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:52.265603  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:52.265663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:52.292373  488914 cri.go:89] found id: ""
	I1202 21:45:52.292387  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.292394  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:52.292399  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:52.292466  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:52.317157  488914 cri.go:89] found id: ""
	I1202 21:45:52.317171  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.317178  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:52.317183  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:52.317240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:52.347843  488914 cri.go:89] found id: ""
	I1202 21:45:52.347856  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.347863  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:52.347868  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:52.347927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:52.372874  488914 cri.go:89] found id: ""
	I1202 21:45:52.372889  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.372895  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:52.372900  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:52.372962  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:52.398247  488914 cri.go:89] found id: ""
	I1202 21:45:52.398260  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.398267  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:52.398273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:52.398330  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:52.445693  488914 cri.go:89] found id: ""
	I1202 21:45:52.445706  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.445713  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:52.445721  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:52.445732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:52.465150  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:52.465167  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:52.540766  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:52.540776  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:52.540797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:52.618862  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:52.618882  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:52.648548  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:52.648565  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.221074  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:55.231158  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:55.231215  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:55.256269  488914 cri.go:89] found id: ""
	I1202 21:45:55.256282  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.256289  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:55.256294  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:55.256371  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:55.281345  488914 cri.go:89] found id: ""
	I1202 21:45:55.281360  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.281367  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:55.281372  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:55.281430  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:55.306779  488914 cri.go:89] found id: ""
	I1202 21:45:55.306793  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.306799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:55.306805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:55.306865  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:55.333304  488914 cri.go:89] found id: ""
	I1202 21:45:55.333318  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.333325  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:55.333333  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:55.333393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:55.358550  488914 cri.go:89] found id: ""
	I1202 21:45:55.358563  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.358570  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:55.358575  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:55.358638  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:55.387929  488914 cri.go:89] found id: ""
	I1202 21:45:55.387943  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.387951  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:55.387957  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:55.388020  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:55.426649  488914 cri.go:89] found id: ""
	I1202 21:45:55.426663  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.426670  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:55.426678  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:55.426687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:55.519746  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:55.519772  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:55.554225  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:55.554241  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.622464  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:55.622484  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:55.638187  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:55.638213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:55.703154  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:58.203385  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:58.213686  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:58.213750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:58.239330  488914 cri.go:89] found id: ""
	I1202 21:45:58.239344  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.239351  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:58.239356  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:58.239416  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:58.264371  488914 cri.go:89] found id: ""
	I1202 21:45:58.264385  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.264392  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:58.264397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:58.264454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:58.289420  488914 cri.go:89] found id: ""
	I1202 21:45:58.289434  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.289441  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:58.289446  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:58.289504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:58.317750  488914 cri.go:89] found id: ""
	I1202 21:45:58.317764  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.317772  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:58.317777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:58.317834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:58.341672  488914 cri.go:89] found id: ""
	I1202 21:45:58.341687  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.341694  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:58.341699  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:58.341764  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:58.366074  488914 cri.go:89] found id: ""
	I1202 21:45:58.366088  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.366094  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:58.366099  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:58.366160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:58.390704  488914 cri.go:89] found id: ""
	I1202 21:45:58.390718  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.390724  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:58.390741  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:58.390751  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:58.474575  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:45:58.474586  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:58.474598  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:58.558574  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:58.558604  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:58.589663  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:58.589680  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:58.656150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:58.656169  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
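Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver on localhost:8441, which matches the empty crictl lookups for kube-apiserver above (the container never started, so nothing listens on the port). A quick manual spot-check; the curl probe is an assumption for illustration, not taken from the log:

```bash
# Hypothetical port check (not in the log): /healthz is the standard
# apiserver health endpoint; "connection refused" matches the errors above.
curl -sk https://localhost:8441/healthz || echo "apiserver not listening on 8441"
```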
	I1202 21:46:01.173977  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:01.186201  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:01.186270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:01.213408  488914 cri.go:89] found id: ""
	I1202 21:46:01.213424  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.213430  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:01.213436  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:01.213502  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:01.239993  488914 cri.go:89] found id: ""
	I1202 21:46:01.240007  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.240014  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:01.240019  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:01.240079  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:01.266106  488914 cri.go:89] found id: ""
	I1202 21:46:01.266120  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.266127  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:01.266132  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:01.266194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:01.292600  488914 cri.go:89] found id: ""
	I1202 21:46:01.292614  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.292621  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:01.292627  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:01.292689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:01.318438  488914 cri.go:89] found id: ""
	I1202 21:46:01.318453  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.318460  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:01.318466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:01.318530  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:01.344830  488914 cri.go:89] found id: ""
	I1202 21:46:01.344843  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.344850  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:01.344856  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:01.344914  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:01.370509  488914 cri.go:89] found id: ""
	I1202 21:46:01.370523  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.370534  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:01.370541  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:01.370551  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:01.400108  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:01.400123  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:01.484583  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:01.484603  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:01.501311  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:01.501329  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:01.571182  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:01.571193  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:01.571204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.148935  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:04.159286  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:04.159346  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:04.191266  488914 cri.go:89] found id: ""
	I1202 21:46:04.191279  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.191286  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:04.191291  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:04.191350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:04.217195  488914 cri.go:89] found id: ""
	I1202 21:46:04.217209  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.217216  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:04.217221  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:04.217285  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:04.243674  488914 cri.go:89] found id: ""
	I1202 21:46:04.243689  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.243696  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:04.243701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:04.243760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:04.269892  488914 cri.go:89] found id: ""
	I1202 21:46:04.269905  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.269921  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:04.269927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:04.269998  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:04.296688  488914 cri.go:89] found id: ""
	I1202 21:46:04.296703  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.296711  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:04.296717  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:04.296785  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:04.322967  488914 cri.go:89] found id: ""
	I1202 21:46:04.322981  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.323017  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:04.323023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:04.323091  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:04.348936  488914 cri.go:89] found id: ""
	I1202 21:46:04.348956  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.348963  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:04.348972  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:04.348981  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:04.415190  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:04.415209  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:04.431456  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:04.431472  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:04.504661  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:04.504671  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:04.504682  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.581468  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:04.581487  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:07.110404  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:07.120667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:07.120727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:07.145924  488914 cri.go:89] found id: ""
	I1202 21:46:07.145938  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.145945  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:07.145950  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:07.146010  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:07.171187  488914 cri.go:89] found id: ""
	I1202 21:46:07.171200  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.171207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:07.171212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:07.171270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:07.197187  488914 cri.go:89] found id: ""
	I1202 21:46:07.197201  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.197208  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:07.197213  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:07.197272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:07.222713  488914 cri.go:89] found id: ""
	I1202 21:46:07.222728  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.222735  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:07.222740  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:07.222800  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:07.249213  488914 cri.go:89] found id: ""
	I1202 21:46:07.249226  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.249233  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:07.249239  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:07.249301  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:07.275464  488914 cri.go:89] found id: ""
	I1202 21:46:07.275478  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.275484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:07.275490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:07.275546  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:07.305137  488914 cri.go:89] found id: ""
	I1202 21:46:07.305151  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.305166  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:07.305174  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:07.305187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:07.370440  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:07.370459  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:07.386336  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:07.386354  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:07.458373  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:07.458383  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:07.458395  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:07.542802  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:07.542822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:10.076833  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:10.087724  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:10.087819  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:10.114700  488914 cri.go:89] found id: ""
	I1202 21:46:10.114714  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.114722  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:10.114728  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:10.114794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:10.140632  488914 cri.go:89] found id: ""
	I1202 21:46:10.140646  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.140652  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:10.140658  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:10.140715  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:10.169820  488914 cri.go:89] found id: ""
	I1202 21:46:10.169834  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.169841  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:10.169850  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:10.169911  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:10.195172  488914 cri.go:89] found id: ""
	I1202 21:46:10.195186  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.195193  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:10.195199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:10.195262  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:10.229303  488914 cri.go:89] found id: ""
	I1202 21:46:10.229317  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.229324  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:10.229330  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:10.229392  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:10.257081  488914 cri.go:89] found id: ""
	I1202 21:46:10.257096  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.257102  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:10.257108  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:10.257168  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:10.283246  488914 cri.go:89] found id: ""
	I1202 21:46:10.283259  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.283267  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:10.283274  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:10.283284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:10.351168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:10.351187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:10.366368  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:10.366385  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:10.438623  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:10.438633  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:10.438646  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:10.516775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:10.516796  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:13.045661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:13.056197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:13.056259  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:13.087662  488914 cri.go:89] found id: ""
	I1202 21:46:13.087675  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.087682  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:13.087688  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:13.087748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:13.113347  488914 cri.go:89] found id: ""
	I1202 21:46:13.113361  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.113368  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:13.113373  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:13.113432  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:13.139083  488914 cri.go:89] found id: ""
	I1202 21:46:13.139098  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.139105  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:13.139110  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:13.139181  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:13.165107  488914 cri.go:89] found id: ""
	I1202 21:46:13.165121  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.165128  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:13.165133  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:13.165196  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:13.190075  488914 cri.go:89] found id: ""
	I1202 21:46:13.190090  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.190107  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:13.190113  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:13.190180  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:13.219255  488914 cri.go:89] found id: ""
	I1202 21:46:13.219269  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.219276  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:13.219281  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:13.219342  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:13.245328  488914 cri.go:89] found id: ""
	I1202 21:46:13.245342  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.245350  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:13.245358  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:13.245369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:13.310150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:13.310168  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:13.325530  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:13.325550  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:13.389916  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:13.382188   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.382836   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384508   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384993   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.386473   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:13.389926  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:13.389938  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:13.474064  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:13.474083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
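The "container status" step packs a fallback chain into one line: run crictl (by resolved path if `which` finds it, bare name otherwise), and fall back to the Docker CLI if that command fails. Roughly equivalent, unpacked for readability:

```bash
# Unpacked form of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
if command -v crictl >/dev/null 2>&1; then
  sudo crictl ps -a     # list all CRI containers, running or exited
else
  sudo docker ps -a     # fall back to Docker if crictl is absent
fi
```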
	I1202 21:46:16.007285  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:16.018077  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:16.018147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:16.048444  488914 cri.go:89] found id: ""
	I1202 21:46:16.048458  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.048465  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:16.048477  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:16.048539  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:16.075066  488914 cri.go:89] found id: ""
	I1202 21:46:16.075079  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.075085  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:16.075090  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:16.075152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:16.100648  488914 cri.go:89] found id: ""
	I1202 21:46:16.100662  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.100669  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:16.100674  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:16.100732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:16.131449  488914 cri.go:89] found id: ""
	I1202 21:46:16.131463  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.131470  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:16.131475  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:16.131534  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:16.158249  488914 cri.go:89] found id: ""
	I1202 21:46:16.158263  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.158270  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:16.158276  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:16.158340  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:16.183613  488914 cri.go:89] found id: ""
	I1202 21:46:16.183627  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.183633  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:16.183641  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:16.183702  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:16.209461  488914 cri.go:89] found id: ""
	I1202 21:46:16.209475  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.209483  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:16.209490  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:16.209500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:16.275500  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:16.275520  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:16.291181  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:16.291196  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:16.361346  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:16.353221   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.354005   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355626   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355946   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.357477   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:16.361356  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:16.361368  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:16.437676  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:16.437697  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
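Every iteration re-checks the same seven control-plane names with crictl; the empty found id: "" and "0 containers" lines above are the not-found case. A sketch of that enumeration, reusing the exact crictl invocation from the log (the helper name is invented):

package main

import (
	"os/exec"
	"strings"
)

// listControlPlaneContainers runs the crictl query logged above for each
// component; an empty slice corresponds to the `found id: ""` lines.
func listControlPlaneContainers() map[string][]string {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	found := make(map[string][]string)
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			found[name] = nil // crictl failed; treat as no containers
			continue
		}
		found[name] = strings.Fields(string(out)) // one container ID per line when present
	}
	return found
}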
	I1202 21:46:18.967950  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:18.977983  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:18.978057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:19.007682  488914 cri.go:89] found id: ""
	I1202 21:46:19.007706  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.007714  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:19.007720  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:19.007794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:19.033939  488914 cri.go:89] found id: ""
	I1202 21:46:19.033961  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.033969  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:19.033975  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:19.034042  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:19.059516  488914 cri.go:89] found id: ""
	I1202 21:46:19.059531  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.059544  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:19.059550  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:19.059616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:19.086051  488914 cri.go:89] found id: ""
	I1202 21:46:19.086065  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.086072  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:19.086078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:19.086135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:19.110886  488914 cri.go:89] found id: ""
	I1202 21:46:19.110899  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.110906  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:19.110911  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:19.110969  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:19.137589  488914 cri.go:89] found id: ""
	I1202 21:46:19.137603  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.137610  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:19.137615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:19.137673  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:19.162755  488914 cri.go:89] found id: ""
	I1202 21:46:19.162769  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.162776  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:19.162784  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:19.162794  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:19.189873  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:19.189888  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:19.255357  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:19.255375  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:19.270844  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:19.270861  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:19.340061  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:19.331455   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.332143   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.333672   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.334108   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.335622   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:19.340072  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:19.340089  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
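All of the kubectl failures above reduce to the same condition: nothing is listening on the apiserver port, so every dial to [::1]:8441 is refused. A quick probe that reproduces the diagnosis, with port 8441 taken from the log (a sketch only, not a minikube API):

package main

import (
	"net"
	"time"
)

// apiServerListening reports whether anything accepts TCP connections on the
// apiserver port seen in the log. The "connection refused" errors from
// kubectl above are exactly this dial failing.
func apiServerListening() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		return false // dial tcp [::1]:8441: connect: connection refused
	}
	conn.Close()
	return true
}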
	I1202 21:46:21.925504  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:21.935839  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:21.935899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:21.960350  488914 cri.go:89] found id: ""
	I1202 21:46:21.960363  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.960370  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:21.960375  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:21.960434  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:21.986080  488914 cri.go:89] found id: ""
	I1202 21:46:21.986097  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.986105  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:21.986112  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:21.986174  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:22.014687  488914 cri.go:89] found id: ""
	I1202 21:46:22.014702  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.014709  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:22.014715  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:22.014778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:22.042230  488914 cri.go:89] found id: ""
	I1202 21:46:22.042245  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.042252  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:22.042257  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:22.042320  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:22.072112  488914 cri.go:89] found id: ""
	I1202 21:46:22.072126  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.072134  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:22.072139  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:22.072210  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:22.098531  488914 cri.go:89] found id: ""
	I1202 21:46:22.098555  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.098562  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:22.098568  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:22.098649  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:22.124074  488914 cri.go:89] found id: ""
	I1202 21:46:22.124088  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.124095  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:22.124102  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:22.124112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:22.190291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:22.190311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:22.205264  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:22.205283  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:22.273286  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:22.264766   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.265364   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.266885   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.267553   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.269194   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:22.273308  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:22.273321  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:22.349070  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:22.349090  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
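Each "Gathering logs for ..." step maps a source to a fixed shell pipeline run through bash on the node. The commands below are copied verbatim from the log; the gather wrapper is an assumed helper, not minikube code:

package main

import "os/exec"

// Shell pipelines copied from the "Run:" lines above; each is executed via
// /bin/bash -c on the node.
var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// gather runs one source's pipeline and returns its combined output.
func gather(source string) ([]byte, error) {
	return exec.Command("/bin/bash", "-c", logSources[source]).CombinedOutput()
}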
	I1202 21:46:24.882662  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:24.893199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:24.893260  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:24.918892  488914 cri.go:89] found id: ""
	I1202 21:46:24.918906  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.918913  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:24.918918  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:24.918977  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:24.944030  488914 cri.go:89] found id: ""
	I1202 21:46:24.944043  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.944050  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:24.944055  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:24.944115  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:24.969743  488914 cri.go:89] found id: ""
	I1202 21:46:24.969758  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.969765  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:24.969770  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:24.969827  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:25.003432  488914 cri.go:89] found id: ""
	I1202 21:46:25.003449  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.003459  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:25.003466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:25.003573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:25.030965  488914 cri.go:89] found id: ""
	I1202 21:46:25.030979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.030985  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:25.030991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:25.031072  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:25.057965  488914 cri.go:89] found id: ""
	I1202 21:46:25.057979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.057986  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:25.057991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:25.058048  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:25.085099  488914 cri.go:89] found id: ""
	I1202 21:46:25.085113  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.085129  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:25.085137  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:25.085147  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:25.115538  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:25.115553  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:25.181412  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:25.181432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:25.196691  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:25.196712  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:25.261474  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:25.253377   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.253981   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.255584   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.256147   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.257741   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:25.261490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:25.261500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:27.838685  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:27.849142  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:27.849203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:27.874519  488914 cri.go:89] found id: ""
	I1202 21:46:27.874533  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.874539  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:27.874545  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:27.874603  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:27.900185  488914 cri.go:89] found id: ""
	I1202 21:46:27.900198  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.900207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:27.900212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:27.900270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:27.926179  488914 cri.go:89] found id: ""
	I1202 21:46:27.926202  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.926209  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:27.926215  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:27.926280  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:27.951950  488914 cri.go:89] found id: ""
	I1202 21:46:27.951964  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.951971  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:27.951977  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:27.952034  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:27.976779  488914 cri.go:89] found id: ""
	I1202 21:46:27.976793  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.976799  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:27.976804  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:27.976864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:28.013447  488914 cri.go:89] found id: ""
	I1202 21:46:28.013462  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.013479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:28.013495  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:28.013562  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:28.041485  488914 cri.go:89] found id: ""
	I1202 21:46:28.041508  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.041516  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:28.041524  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:28.041536  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:28.057180  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:28.057197  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:28.121537  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:28.113244   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.113943   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.115648   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.116208   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.117879   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:28.121548  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:28.121559  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:28.197190  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:28.197210  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:28.229525  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:28.229541  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
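The "describe nodes" gather step runs the kubectl binary staged under /var/lib/minikube against the node-local kubeconfig; its exit status 1 is what produces the W-level "failed describe nodes" entries above. A sketch with paths taken from the log:

package main

import "os/exec"

// describeNodes reproduces the "describe nodes" gather step. With no
// apiserver listening, CombinedOutput returns the memcache.go "Unhandled
// Error" stderr shown above together with a non-nil *exec.ExitError.
func describeNodes() (string, error) {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	).CombinedOutput()
	return string(out), err
}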
	I1202 21:46:30.795826  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:30.806266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:30.806329  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:30.834208  488914 cri.go:89] found id: ""
	I1202 21:46:30.834222  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.834229  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:30.834234  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:30.834293  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:30.859664  488914 cri.go:89] found id: ""
	I1202 21:46:30.859678  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.859685  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:30.859690  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:30.859748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:30.889034  488914 cri.go:89] found id: ""
	I1202 21:46:30.889048  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.889055  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:30.889061  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:30.889117  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:30.914676  488914 cri.go:89] found id: ""
	I1202 21:46:30.914689  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.914696  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:30.914701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:30.914759  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:30.939761  488914 cri.go:89] found id: ""
	I1202 21:46:30.939774  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.939782  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:30.939787  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:30.939843  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:30.965463  488914 cri.go:89] found id: ""
	I1202 21:46:30.965476  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.965483  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:30.965488  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:30.965545  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:30.990187  488914 cri.go:89] found id: ""
	I1202 21:46:30.990200  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.990206  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:30.990224  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:30.990236  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:31.005797  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:31.005813  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:31.069684  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:31.062028   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.062610   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064158   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064666   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.066156   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:31.069694  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:31.069707  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:31.145787  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:31.145809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:31.178743  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:31.178759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:33.744496  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:33.754580  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:33.754651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:33.779528  488914 cri.go:89] found id: ""
	I1202 21:46:33.779541  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.779548  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:33.779554  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:33.779616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:33.804198  488914 cri.go:89] found id: ""
	I1202 21:46:33.804212  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.804219  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:33.804227  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:33.804289  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:33.829645  488914 cri.go:89] found id: ""
	I1202 21:46:33.829659  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.829666  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:33.829675  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:33.829734  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:33.858338  488914 cri.go:89] found id: ""
	I1202 21:46:33.858352  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.858368  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:33.858375  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:33.858433  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:33.884555  488914 cri.go:89] found id: ""
	I1202 21:46:33.884570  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.884578  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:33.884583  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:33.884651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:33.912967  488914 cri.go:89] found id: ""
	I1202 21:46:33.912981  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.912988  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:33.912994  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:33.913055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:33.938088  488914 cri.go:89] found id: ""
	I1202 21:46:33.938102  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.938110  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:33.938118  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:33.938133  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:34.003604  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:34.003631  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:34.022128  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:34.022146  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:34.092004  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:34.083929   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.084375   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086257   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086725   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.088064   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:34.092015  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:34.092029  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:34.169499  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:34.169519  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:36.700051  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:36.711435  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:36.711497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:36.738690  488914 cri.go:89] found id: ""
	I1202 21:46:36.738704  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.738711  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:36.738717  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:36.738776  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:36.765789  488914 cri.go:89] found id: ""
	I1202 21:46:36.765802  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.765810  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:36.765815  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:36.765880  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:36.790056  488914 cri.go:89] found id: ""
	I1202 21:46:36.790070  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.790077  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:36.790082  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:36.790138  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:36.818201  488914 cri.go:89] found id: ""
	I1202 21:46:36.818214  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.818221  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:36.818227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:36.818288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:36.845623  488914 cri.go:89] found id: ""
	I1202 21:46:36.845637  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.845644  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:36.845650  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:36.845710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:36.871336  488914 cri.go:89] found id: ""
	I1202 21:46:36.871350  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.871357  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:36.871362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:36.871427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:36.897589  488914 cri.go:89] found id: ""
	I1202 21:46:36.897605  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.897611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:36.897619  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:36.897630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:36.913198  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:36.913213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:36.973711  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:36.965706   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.966427   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.967404   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.968855   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.969298   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:36.973721  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:36.973732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:37.054868  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:37.054889  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:37.083961  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:37.083976  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.651305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:39.662125  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:39.662189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:39.693251  488914 cri.go:89] found id: ""
	I1202 21:46:39.693264  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.693271  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:39.693277  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:39.693333  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:39.720953  488914 cri.go:89] found id: ""
	I1202 21:46:39.720969  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.720976  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:39.720981  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:39.721039  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:39.747423  488914 cri.go:89] found id: ""
	I1202 21:46:39.747436  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.747443  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:39.747448  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:39.747512  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:39.773314  488914 cri.go:89] found id: ""
	I1202 21:46:39.773328  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.773335  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:39.773340  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:39.773396  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:39.801946  488914 cri.go:89] found id: ""
	I1202 21:46:39.801960  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.801966  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:39.801971  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:39.802027  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:39.831169  488914 cri.go:89] found id: ""
	I1202 21:46:39.831182  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.831189  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:39.831195  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:39.831255  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:39.855958  488914 cri.go:89] found id: ""
	I1202 21:46:39.855972  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.855979  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:39.855987  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:39.855997  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.921041  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:39.921076  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:39.936417  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:39.936433  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:40.005449  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:39.993742   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.994635   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996381   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996674   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.998192   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:40.005465  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:40.005479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:40.099731  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:40.099754  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.632158  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:42.642592  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:42.642655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:42.680753  488914 cri.go:89] found id: ""
	I1202 21:46:42.680767  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.680774  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:42.680780  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:42.680845  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:42.727033  488914 cri.go:89] found id: ""
	I1202 21:46:42.727047  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.727056  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:42.727062  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:42.727125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:42.753808  488914 cri.go:89] found id: ""
	I1202 21:46:42.753822  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.753829  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:42.753848  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:42.753906  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:42.782178  488914 cri.go:89] found id: ""
	I1202 21:46:42.782192  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.782200  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:42.782206  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:42.782272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:42.807839  488914 cri.go:89] found id: ""
	I1202 21:46:42.807853  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.807860  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:42.807867  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:42.807927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:42.834250  488914 cri.go:89] found id: ""
	I1202 21:46:42.834276  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.834283  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:42.834290  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:42.834355  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:42.861699  488914 cri.go:89] found id: ""
	I1202 21:46:42.861721  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.861728  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:42.861736  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:42.861747  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:42.937587  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:42.937608  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.969352  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:42.969374  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:43.035113  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:43.035138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:43.050909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:43.050924  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:43.116601  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:43.107713   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.108431   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.110316   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.111086   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.112866   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.616905  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:45.627026  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:45.627089  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:45.653296  488914 cri.go:89] found id: ""
	I1202 21:46:45.653311  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.653318  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:45.653323  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:45.653389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:45.685320  488914 cri.go:89] found id: ""
	I1202 21:46:45.685334  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.685342  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:45.685347  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:45.685407  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:45.714439  488914 cri.go:89] found id: ""
	I1202 21:46:45.714453  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.714460  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:45.714466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:45.714524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:45.741650  488914 cri.go:89] found id: ""
	I1202 21:46:45.741665  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.741672  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:45.741678  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:45.741748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:45.768339  488914 cri.go:89] found id: ""
	I1202 21:46:45.768374  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.768381  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:45.768387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:45.768446  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:45.793382  488914 cri.go:89] found id: ""
	I1202 21:46:45.793396  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.793404  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:45.793410  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:45.793470  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:45.821520  488914 cri.go:89] found id: ""
	I1202 21:46:45.821534  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.821541  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:45.821549  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:45.821560  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:45.836636  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:45.836657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:45.903141  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:45.894421   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.895256   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897082   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897803   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.899654   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.903152  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:45.903182  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:45.983151  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:45.983172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:46.016509  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:46.016525  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:48.589533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:48.600004  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:48.600063  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:48.624724  488914 cri.go:89] found id: ""
	I1202 21:46:48.624738  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.624745  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:48.624751  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:48.624809  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:48.649307  488914 cri.go:89] found id: ""
	I1202 21:46:48.649322  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.649329  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:48.649335  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:48.649393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:48.689464  488914 cri.go:89] found id: ""
	I1202 21:46:48.689477  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.689484  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:48.689489  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:48.689548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:48.718180  488914 cri.go:89] found id: ""
	I1202 21:46:48.718195  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.718202  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:48.718207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:48.718274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:48.748759  488914 cri.go:89] found id: ""
	I1202 21:46:48.748773  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.748781  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:48.748786  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:48.748847  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:48.773610  488914 cri.go:89] found id: ""
	I1202 21:46:48.773624  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.773631  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:48.773637  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:48.773694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:48.798539  488914 cri.go:89] found id: ""
	I1202 21:46:48.798553  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.798560  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:48.798568  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:48.798580  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:48.813434  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:48.813450  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:48.873005  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:48.865979   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.866496   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.867575   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.868055   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.869544   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:48.873016  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:48.873027  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:48.949124  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:48.949143  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:48.981243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:48.981259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.549061  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:51.558950  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:51.559026  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:51.583587  488914 cri.go:89] found id: ""
	I1202 21:46:51.583601  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.583608  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:51.583614  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:51.583674  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:51.609150  488914 cri.go:89] found id: ""
	I1202 21:46:51.609163  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.609170  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:51.609175  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:51.609237  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:51.634897  488914 cri.go:89] found id: ""
	I1202 21:46:51.634910  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.634917  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:51.634922  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:51.634980  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:51.665746  488914 cri.go:89] found id: ""
	I1202 21:46:51.665760  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.665766  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:51.665772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:51.665830  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:51.704219  488914 cri.go:89] found id: ""
	I1202 21:46:51.704233  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.704240  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:51.704246  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:51.704310  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:51.736171  488914 cri.go:89] found id: ""
	I1202 21:46:51.736194  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.736202  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:51.736207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:51.736274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:51.765446  488914 cri.go:89] found id: ""
	I1202 21:46:51.765469  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.765476  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:51.765484  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:51.765494  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:51.792551  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:51.792566  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.857688  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:51.857706  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:51.873199  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:51.873214  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:51.942299  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:51.934624   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.935273   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.936792   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.937322   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.938323   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:51.942311  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:51.942323  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.519031  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:54.529427  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:54.529497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:54.558708  488914 cri.go:89] found id: ""
	I1202 21:46:54.558722  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.558729  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:54.558735  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:54.558796  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:54.583135  488914 cri.go:89] found id: ""
	I1202 21:46:54.583148  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.583155  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:54.583160  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:54.583221  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:54.609361  488914 cri.go:89] found id: ""
	I1202 21:46:54.609382  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.609390  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:54.609396  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:54.609461  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:54.637663  488914 cri.go:89] found id: ""
	I1202 21:46:54.637677  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.637683  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:54.637691  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:54.637748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:54.666901  488914 cri.go:89] found id: ""
	I1202 21:46:54.666915  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.666922  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:54.666927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:54.666987  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:54.695329  488914 cri.go:89] found id: ""
	I1202 21:46:54.695343  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.695350  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:54.695355  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:54.695413  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:54.724947  488914 cri.go:89] found id: ""
	I1202 21:46:54.724961  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.724967  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:54.724975  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:54.724986  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:54.742963  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:54.742980  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:54.810513  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:54.803073   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.803954   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805454   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805860   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.806992   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:54.810523  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:54.810534  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.883552  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:54.883571  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:54.911389  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:54.911406  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.481762  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:57.492870  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:57.492930  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:57.517199  488914 cri.go:89] found id: ""
	I1202 21:46:57.517213  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.517220  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:57.517225  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:57.517292  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:57.543039  488914 cri.go:89] found id: ""
	I1202 21:46:57.543053  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.543060  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:57.543066  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:57.543130  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:57.567509  488914 cri.go:89] found id: ""
	I1202 21:46:57.567524  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.567530  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:57.567536  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:57.567597  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:57.593052  488914 cri.go:89] found id: ""
	I1202 21:46:57.593074  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.593081  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:57.593087  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:57.593151  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:57.618537  488914 cri.go:89] found id: ""
	I1202 21:46:57.618551  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.618558  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:57.618563  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:57.618626  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:57.645917  488914 cri.go:89] found id: ""
	I1202 21:46:57.645931  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.645938  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:57.645943  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:57.646003  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:57.673325  488914 cri.go:89] found id: ""
	I1202 21:46:57.673338  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.673353  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:57.673362  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:57.673378  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:57.748284  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:57.740291   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.740917   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.742583   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.743218   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.744902   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:57.748294  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:57.748305  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:57.828296  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:57.828314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:57.855830  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:57.855846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.921121  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:57.921140  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:00.436836  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:00.448366  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:00.448436  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:00.478939  488914 cri.go:89] found id: ""
	I1202 21:47:00.478953  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.478960  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:00.478969  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:00.479059  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:00.505959  488914 cri.go:89] found id: ""
	I1202 21:47:00.505974  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.505981  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:00.505986  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:00.506050  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:00.532568  488914 cri.go:89] found id: ""
	I1202 21:47:00.532584  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.532597  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:00.532602  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:00.532667  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:00.561666  488914 cri.go:89] found id: ""
	I1202 21:47:00.561680  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.561687  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:00.561692  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:00.561753  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:00.588051  488914 cri.go:89] found id: ""
	I1202 21:47:00.588065  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.588072  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:00.588078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:00.588139  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:00.612422  488914 cri.go:89] found id: ""
	I1202 21:47:00.612437  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.612443  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:00.612449  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:00.612513  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:00.642069  488914 cri.go:89] found id: ""
	I1202 21:47:00.642082  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.642089  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:00.642097  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:00.642108  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:00.727511  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:00.716696   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.717383   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.721543   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.722286   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.724054   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:00.727520  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:00.727531  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:00.803650  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:00.803671  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:00.832608  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:00.832624  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:00.900692  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:00.900713  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.417333  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:03.427135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:03.427205  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:03.451551  488914 cri.go:89] found id: ""
	I1202 21:47:03.451566  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.451573  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:03.451578  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:03.451635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:03.476736  488914 cri.go:89] found id: ""
	I1202 21:47:03.476750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.476757  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:03.476763  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:03.476825  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:03.501736  488914 cri.go:89] found id: ""
	I1202 21:47:03.501750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.501756  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:03.501761  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:03.501820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:03.527339  488914 cri.go:89] found id: ""
	I1202 21:47:03.527353  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.527360  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:03.527365  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:03.527427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:03.552910  488914 cri.go:89] found id: ""
	I1202 21:47:03.552923  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.552930  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:03.552936  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:03.552994  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:03.578110  488914 cri.go:89] found id: ""
	I1202 21:47:03.578124  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.578130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:03.578135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:03.578194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:03.603194  488914 cri.go:89] found id: ""
	I1202 21:47:03.603208  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.603215  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:03.603223  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:03.603233  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:03.688154  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:03.688174  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:03.725392  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:03.725408  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:03.791852  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:03.791873  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.807065  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:03.807080  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:03.882666  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
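The block above is the probe this run repeats while waiting for the control plane: for each expected component it runs crictl ps -a --quiet --name=<component> and, when the output is empty, logs that no matching container was found. Below is a minimal Go sketch of that check; it is not minikube's actual cri.go code, and the helper name and component list are illustrative assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same crictl query seen in the log. With
	// --quiet, crictl prints one container ID per line, so empty output
	// means no container matches the name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Components the log checks on every pass (illustrative subset).
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			} else {
				fmt.Printf("%q containers: %v\n", c, ids)
			}
		}
	}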
	I1202 21:47:06.384350  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:06.394676  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:06.394749  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:06.423508  488914 cri.go:89] found id: ""
	I1202 21:47:06.423523  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.423530  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:06.423536  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:06.423595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:06.449675  488914 cri.go:89] found id: ""
	I1202 21:47:06.449689  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.449696  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:06.449701  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:06.449762  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:06.480053  488914 cri.go:89] found id: ""
	I1202 21:47:06.480066  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.480073  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:06.480078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:06.480140  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:06.508415  488914 cri.go:89] found id: ""
	I1202 21:47:06.508428  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.508435  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:06.508440  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:06.508498  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:06.533743  488914 cri.go:89] found id: ""
	I1202 21:47:06.533756  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.533763  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:06.533776  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:06.533836  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:06.558457  488914 cri.go:89] found id: ""
	I1202 21:47:06.558472  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.558479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:06.558484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:06.558548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:06.585312  488914 cri.go:89] found id: ""
	I1202 21:47:06.585326  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.585333  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:06.585341  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:06.585352  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:06.600648  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:06.600665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:06.677036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:06.677046  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:06.677058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:06.757223  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:06.757244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:06.785439  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:06.785455  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.357941  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:09.369144  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:09.369207  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:09.398056  488914 cri.go:89] found id: ""
	I1202 21:47:09.398070  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.398077  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:09.398083  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:09.398143  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:09.424606  488914 cri.go:89] found id: ""
	I1202 21:47:09.424620  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.424628  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:09.424633  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:09.424694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:09.451520  488914 cri.go:89] found id: ""
	I1202 21:47:09.451535  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.451542  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:09.451547  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:09.451607  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:09.477315  488914 cri.go:89] found id: ""
	I1202 21:47:09.477330  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.477337  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:09.477344  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:09.477399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:09.503654  488914 cri.go:89] found id: ""
	I1202 21:47:09.503668  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.503675  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:09.503680  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:09.503750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:09.529545  488914 cri.go:89] found id: ""
	I1202 21:47:09.529558  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.529565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:09.529571  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:09.529629  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:09.554726  488914 cri.go:89] found id: ""
	I1202 21:47:09.554740  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.554747  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:09.554754  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:09.554767  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.620273  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:09.620293  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:09.635655  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:09.635672  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:09.720524  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:09.720534  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:09.720544  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:09.800379  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:09.800400  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
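Between probes the run waits roughly three seconds and retries, while every kubectl describe nodes attempt fails because nothing is listening on localhost:8441. A hedged sketch of such a wait loop follows; the address and cadence come from the log, but the structure and timeout are assumptions, not minikube's implementation.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverUp reports whether a TCP connection to the apiserver
	// address succeeds; a refused connection matches the repeated
	// "connect: connection refused" errors in the log.
	func apiserverUp(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			if apiserverUp("localhost:8441") {
				fmt.Println("apiserver is reachable")
				return
			}
			// In the real run each failed pass also gathers kubelet, dmesg,
			// CRI-O, and container-status logs before retrying.
			time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
		}
		fmt.Println("timed out waiting for apiserver on localhost:8441")
	}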
	I1202 21:47:12.331221  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:12.341899  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:12.341957  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:12.369642  488914 cri.go:89] found id: ""
	I1202 21:47:12.369656  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.369663  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:12.369668  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:12.369729  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:12.395917  488914 cri.go:89] found id: ""
	I1202 21:47:12.395930  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.395938  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:12.395943  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:12.396015  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:12.422817  488914 cri.go:89] found id: ""
	I1202 21:47:12.422831  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.422838  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:12.422843  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:12.422903  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:12.451973  488914 cri.go:89] found id: ""
	I1202 21:47:12.451986  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.451993  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:12.451998  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:12.452057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:12.477543  488914 cri.go:89] found id: ""
	I1202 21:47:12.477557  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.477564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:12.477569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:12.477627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:12.504941  488914 cri.go:89] found id: ""
	I1202 21:47:12.504954  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.504961  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:12.504967  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:12.505025  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:12.530800  488914 cri.go:89] found id: ""
	I1202 21:47:12.530821  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.530828  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:12.530836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:12.530846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:12.596910  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:12.596929  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:12.612316  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:12.612333  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:12.684014  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:12.684025  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:12.684039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:12.771749  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:12.771771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:15.304325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:15.315385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:15.315451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:15.341411  488914 cri.go:89] found id: ""
	I1202 21:47:15.341427  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.341434  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:15.341439  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:15.341501  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:15.366798  488914 cri.go:89] found id: ""
	I1202 21:47:15.366811  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.366818  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:15.366824  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:15.366884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:15.391138  488914 cri.go:89] found id: ""
	I1202 21:47:15.391152  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.391159  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:15.391164  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:15.391226  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:15.415514  488914 cri.go:89] found id: ""
	I1202 21:47:15.415528  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.415535  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:15.415540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:15.415595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:15.440750  488914 cri.go:89] found id: ""
	I1202 21:47:15.440764  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.440771  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:15.440777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:15.440839  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:15.469806  488914 cri.go:89] found id: ""
	I1202 21:47:15.469820  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.469827  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:15.469833  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:15.469891  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:15.497648  488914 cri.go:89] found id: ""
	I1202 21:47:15.497661  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.497668  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:15.497675  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:15.497687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:15.567654  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:15.567679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:15.582770  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:15.582785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:15.647132  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:15.647143  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:15.647154  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:15.740463  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:15.740492  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.270232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:18.280720  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:18.280782  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:18.305710  488914 cri.go:89] found id: ""
	I1202 21:47:18.305724  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.305731  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:18.305736  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:18.305793  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:18.329526  488914 cri.go:89] found id: ""
	I1202 21:47:18.329539  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.329545  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:18.329550  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:18.329606  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:18.355166  488914 cri.go:89] found id: ""
	I1202 21:47:18.355195  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.355202  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:18.355207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:18.355275  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:18.381992  488914 cri.go:89] found id: ""
	I1202 21:47:18.382006  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.382013  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:18.382018  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:18.382080  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:18.410268  488914 cri.go:89] found id: ""
	I1202 21:47:18.410283  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.410290  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:18.410296  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:18.410354  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:18.434607  488914 cri.go:89] found id: ""
	I1202 21:47:18.434620  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.434627  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:18.434632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:18.434689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:18.460092  488914 cri.go:89] found id: ""
	I1202 21:47:18.460106  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.460112  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:18.460120  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:18.460130  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:18.525571  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:18.525580  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:18.525591  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:18.601752  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:18.601776  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.631242  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:18.631258  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:18.706458  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:18.706478  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.222232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:21.232120  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:21.232178  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:21.257057  488914 cri.go:89] found id: ""
	I1202 21:47:21.257071  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.257078  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:21.257089  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:21.257145  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:21.281739  488914 cri.go:89] found id: ""
	I1202 21:47:21.281752  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.281759  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:21.281764  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:21.281820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:21.306878  488914 cri.go:89] found id: ""
	I1202 21:47:21.306892  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.306899  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:21.306905  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:21.306959  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:21.332327  488914 cri.go:89] found id: ""
	I1202 21:47:21.332340  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.332347  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:21.332352  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:21.332408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:21.356717  488914 cri.go:89] found id: ""
	I1202 21:47:21.356730  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.356737  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:21.356742  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:21.356799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:21.380787  488914 cri.go:89] found id: ""
	I1202 21:47:21.380801  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.380807  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:21.380813  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:21.380867  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:21.405984  488914 cri.go:89] found id: ""
	I1202 21:47:21.405998  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.406005  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:21.406013  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:21.406023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:21.438420  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:21.438435  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:21.503149  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:21.503170  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.518755  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:21.518771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:21.584415  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:21.584425  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:21.584437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.161915  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:24.172338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:24.172401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:24.197081  488914 cri.go:89] found id: ""
	I1202 21:47:24.197095  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.197102  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:24.197108  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:24.197166  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:24.222792  488914 cri.go:89] found id: ""
	I1202 21:47:24.222806  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.222827  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:24.222833  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:24.222898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:24.248463  488914 cri.go:89] found id: ""
	I1202 21:47:24.248486  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.248495  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:24.248500  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:24.248561  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:24.282539  488914 cri.go:89] found id: ""
	I1202 21:47:24.282554  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.282561  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:24.282567  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:24.282636  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:24.308071  488914 cri.go:89] found id: ""
	I1202 21:47:24.308086  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.308093  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:24.308098  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:24.308165  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:24.333666  488914 cri.go:89] found id: ""
	I1202 21:47:24.333689  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.333696  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:24.333702  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:24.333769  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:24.363212  488914 cri.go:89] found id: ""
	I1202 21:47:24.363226  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.363233  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:24.363254  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:24.363265  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:24.428642  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:24.428664  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:24.444347  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:24.444363  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:24.510036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:24.510047  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:24.510058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.585705  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:24.585726  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:27.116827  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:27.127233  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:27.127299  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:27.156311  488914 cri.go:89] found id: ""
	I1202 21:47:27.156325  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.156332  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:27.156337  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:27.156401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:27.180597  488914 cri.go:89] found id: ""
	I1202 21:47:27.180611  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.180617  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:27.180623  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:27.180682  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:27.205333  488914 cri.go:89] found id: ""
	I1202 21:47:27.205347  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.205354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:27.205359  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:27.205417  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:27.231165  488914 cri.go:89] found id: ""
	I1202 21:47:27.231179  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.231186  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:27.231192  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:27.231251  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:27.260640  488914 cri.go:89] found id: ""
	I1202 21:47:27.260654  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.260662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:27.260667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:27.260732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:27.286552  488914 cri.go:89] found id: ""
	I1202 21:47:27.286566  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.286573  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:27.286578  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:27.286637  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:27.311590  488914 cri.go:89] found id: ""
	I1202 21:47:27.311604  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.311611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:27.311619  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:27.311630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:27.376291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:27.376311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:27.391299  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:27.391314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:27.452046  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:27.452056  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:27.452067  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:27.527099  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:27.527119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
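	The cycle above is the unit that repeats for the rest of this trace: probe for a kube-apiserver process with pgrep, then ask the CRI runtime for each control-plane container by name. A minimal local sketch of the per-component probe, assuming crictl is on PATH (the listContainerIDs helper is hypothetical; minikube's cri.go runs the same command over SSH inside the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs is a hypothetical stand-in for minikube's cri.go,
	// which runs the same crictl query over SSH inside the node.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one container ID per line, or nothing
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c) // the W-level outcome above
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

	An empty crictl result is what produces the found id: "" / 0 containers pairs in the log, so every component check in this trace degenerates to the warning branch.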
	I1202 21:47:30.055495  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:30.067197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:30.067272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:30.093385  488914 cri.go:89] found id: ""
	I1202 21:47:30.093400  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.093407  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:30.093413  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:30.093475  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:30.120468  488914 cri.go:89] found id: ""
	I1202 21:47:30.120482  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.120490  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:30.120495  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:30.120558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:30.147744  488914 cri.go:89] found id: ""
	I1202 21:47:30.147759  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.147767  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:30.147772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:30.147838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:30.173628  488914 cri.go:89] found id: ""
	I1202 21:47:30.173650  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.173658  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:30.173664  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:30.173742  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:30.201952  488914 cri.go:89] found id: ""
	I1202 21:47:30.201992  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.202001  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:30.202007  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:30.202075  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:30.228366  488914 cri.go:89] found id: ""
	I1202 21:47:30.228380  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.228387  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:30.228399  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:30.228468  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:30.254412  488914 cri.go:89] found id: ""
	I1202 21:47:30.254426  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.254434  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:30.254442  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:30.254453  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:30.330454  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:30.330474  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.364243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:30.364259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:30.429823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:30.429841  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:30.445036  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:30.445058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:30.506029  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
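	Each of the five E-level lines above is one retry of kubectl's API discovery, and all of them fail at the TCP layer: nothing is listening on the profile's apiserver port. A minimal sketch of that reachability check, assuming the same localhost:8441 endpoint the kubeconfig points at:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the apiserver port directly; "connection refused" here is the
		// same condition kubectl reports above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}

	Until this dial succeeds, every "describe nodes" attempt in the trace fails the same way.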
	I1202 21:47:33.006821  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:33.017853  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:33.017924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:33.043314  488914 cri.go:89] found id: ""
	I1202 21:47:33.043328  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.043335  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:33.043343  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:33.043402  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:33.068806  488914 cri.go:89] found id: ""
	I1202 21:47:33.068820  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.068826  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:33.068831  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:33.068889  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:33.097822  488914 cri.go:89] found id: ""
	I1202 21:47:33.097835  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.097842  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:33.097847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:33.097905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:33.123154  488914 cri.go:89] found id: ""
	I1202 21:47:33.123168  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.123176  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:33.123181  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:33.123240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:33.148284  488914 cri.go:89] found id: ""
	I1202 21:47:33.148298  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.148305  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:33.148310  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:33.148369  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:33.173434  488914 cri.go:89] found id: ""
	I1202 21:47:33.173448  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.173454  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:33.173460  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:33.173519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:33.198619  488914 cri.go:89] found id: ""
	I1202 21:47:33.198633  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.198640  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:33.198647  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:33.198662  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:33.263426  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:33.263446  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:33.279026  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:33.279042  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:33.339351  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:33.331868   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.332345   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334080   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334388   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.335856   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:33.339361  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:33.339372  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:33.418569  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:33.418588  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
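	The timestamps (21:47:27, :30, :33, :36, ...) show the enclosing wait loop ticking at roughly three-second intervals. A hypothetical reconstruction of that loop around the pgrep liveness probe the log records; the helper name, sleep interval, and timeout are assumptions, not minikube's actual constants:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the liveness probe in the trace:
	// `sudo pgrep -xnf kube-apiserver.*minikube.*` exits 0 when a matching
	// process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer is a hypothetical helper: poll until the deadline,
	// gathering diagnostics (as the log does) on every failed pass.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return nil
			}
			time.Sleep(3 * time.Second) // matches the ~3s cadence of the entries above
		}
		return errors.New("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}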
	I1202 21:47:35.951124  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:35.962387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:35.962491  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:35.989088  488914 cri.go:89] found id: ""
	I1202 21:47:35.989102  488914 logs.go:282] 0 containers: []
	W1202 21:47:35.989109  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:35.989115  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:35.989176  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:36.017461  488914 cri.go:89] found id: ""
	I1202 21:47:36.017477  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.017484  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:36.017490  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:36.017614  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:36.046790  488914 cri.go:89] found id: ""
	I1202 21:47:36.046805  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.046812  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:36.046817  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:36.046875  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:36.073683  488914 cri.go:89] found id: ""
	I1202 21:47:36.073697  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.073704  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:36.073710  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:36.073767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:36.101900  488914 cri.go:89] found id: ""
	I1202 21:47:36.101914  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.101921  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:36.101926  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:36.101985  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:36.130435  488914 cri.go:89] found id: ""
	I1202 21:47:36.130449  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.130456  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:36.130462  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:36.130524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:36.157134  488914 cri.go:89] found id: ""
	I1202 21:47:36.157148  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.157155  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:36.157163  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:36.157173  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:36.221900  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:36.221919  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:36.237051  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:36.237068  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:36.299876  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:36.291935   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.292632   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294289   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294810   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.296452   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:36.299886  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:36.299910  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:36.374213  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:36.374232  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:38.902545  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:38.913357  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:38.913415  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:38.944543  488914 cri.go:89] found id: ""
	I1202 21:47:38.944557  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.944563  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:38.944569  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:38.944627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:38.975916  488914 cri.go:89] found id: ""
	I1202 21:47:38.975930  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.975937  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:38.975942  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:38.976001  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:39.009795  488914 cri.go:89] found id: ""
	I1202 21:47:39.009810  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.009817  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:39.009823  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:39.009886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:39.034688  488914 cri.go:89] found id: ""
	I1202 21:47:39.034718  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.034726  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:39.034732  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:39.034805  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:39.059667  488914 cri.go:89] found id: ""
	I1202 21:47:39.059693  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.059701  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:39.059706  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:39.059767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:39.085837  488914 cri.go:89] found id: ""
	I1202 21:47:39.085851  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.085868  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:39.085873  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:39.085941  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:39.111280  488914 cri.go:89] found id: ""
	I1202 21:47:39.111295  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.111302  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:39.111310  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:39.111320  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:39.175646  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:39.175668  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:39.190971  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:39.190987  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:39.258563  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:39.251357   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.251945   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253419   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253861   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.254959   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:39.258573  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:39.258584  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:39.333779  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:39.333798  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
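	Every failed pass fans out to the same log sources before the next retry. A sketch of that gathering step; the command strings are copied verbatim from the ssh_runner lines above, while the surrounding harness is hypothetical (describe nodes is omitted here because it needs the unreachable apiserver):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command strings copied from the ssh_runner.go lines in this trace.
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
			}
			fmt.Printf("==> %s <==\n%s\n", s.name, out)
		}
	}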
	I1202 21:47:41.863817  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:41.873822  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:41.873882  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:41.899560  488914 cri.go:89] found id: ""
	I1202 21:47:41.899585  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.899592  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:41.899598  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:41.899663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:41.937866  488914 cri.go:89] found id: ""
	I1202 21:47:41.937880  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.937887  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:41.937892  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:41.937960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:41.971862  488914 cri.go:89] found id: ""
	I1202 21:47:41.971876  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.971901  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:41.971907  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:41.971975  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:42.010639  488914 cri.go:89] found id: ""
	I1202 21:47:42.010655  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.010663  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:42.010695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:42.010778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:42.040775  488914 cri.go:89] found id: ""
	I1202 21:47:42.040790  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.040800  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:42.040805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:42.040881  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:42.072124  488914 cri.go:89] found id: ""
	I1202 21:47:42.072139  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.072149  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:42.072175  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:42.072252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:42.105424  488914 cri.go:89] found id: ""
	I1202 21:47:42.105439  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.105447  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:42.105456  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:42.105467  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:42.175007  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:42.175032  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:42.194759  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:42.194785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:42.271235  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:42.261967   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.262745   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264485   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264882   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.266741   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:42.271247  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:42.271260  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:42.360263  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:42.360296  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:44.892475  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:44.902425  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:44.902484  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:44.929930  488914 cri.go:89] found id: ""
	I1202 21:47:44.929944  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.929952  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:44.929957  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:44.930017  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:44.959205  488914 cri.go:89] found id: ""
	I1202 21:47:44.959219  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.959225  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:44.959231  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:44.959288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:44.991335  488914 cri.go:89] found id: ""
	I1202 21:47:44.991350  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.991357  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:44.991362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:44.991437  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:45.047326  488914 cri.go:89] found id: ""
	I1202 21:47:45.047342  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.047350  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:45.047358  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:45.047440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:45.110770  488914 cri.go:89] found id: ""
	I1202 21:47:45.110787  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.110796  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:45.110803  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:45.110872  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:45.147274  488914 cri.go:89] found id: ""
	I1202 21:47:45.147290  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.147298  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:45.147304  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:45.147372  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:45.230398  488914 cri.go:89] found id: ""
	I1202 21:47:45.230413  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.230421  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:45.230437  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:45.230457  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:45.315457  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:45.307106   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.308124   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.309943   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.310298   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.311989   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:45.315469  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:45.315479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:45.391401  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:45.391421  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:45.422183  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:45.422200  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:45.491250  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:45.491269  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.007522  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:48.019509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:48.019579  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:48.047045  488914 cri.go:89] found id: ""
	I1202 21:47:48.047059  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.047066  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:48.047072  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:48.047133  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:48.073355  488914 cri.go:89] found id: ""
	I1202 21:47:48.073370  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.073377  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:48.073383  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:48.073443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:48.101623  488914 cri.go:89] found id: ""
	I1202 21:47:48.101640  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.101653  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:48.101658  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:48.101728  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:48.128708  488914 cri.go:89] found id: ""
	I1202 21:47:48.128722  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.128729  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:48.128734  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:48.128795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:48.154337  488914 cri.go:89] found id: ""
	I1202 21:47:48.154352  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.154359  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:48.154364  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:48.154426  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:48.181724  488914 cri.go:89] found id: ""
	I1202 21:47:48.181739  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.181746  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:48.181752  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:48.181810  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:48.207628  488914 cri.go:89] found id: ""
	I1202 21:47:48.207641  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.207648  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:48.207655  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:48.207665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:48.273678  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:48.273699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.289393  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:48.289410  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:48.353116  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:48.345571   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.346016   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347574   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347915   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.349479   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:48.353126  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:48.353138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:48.429785  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:48.429809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:50.961028  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:50.971337  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:50.971408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:51.004925  488914 cri.go:89] found id: ""
	I1202 21:47:51.004941  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.004949  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:51.004956  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:51.005023  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:51.033852  488914 cri.go:89] found id: ""
	I1202 21:47:51.033866  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.033873  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:51.033879  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:51.033951  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:51.065370  488914 cri.go:89] found id: ""
	I1202 21:47:51.065384  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.065392  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:51.065397  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:51.065454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:51.091797  488914 cri.go:89] found id: ""
	I1202 21:47:51.091811  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.091819  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:51.091824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:51.091886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:51.118245  488914 cri.go:89] found id: ""
	I1202 21:47:51.118260  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.118267  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:51.118273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:51.118350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:51.144813  488914 cri.go:89] found id: ""
	I1202 21:47:51.144828  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.144835  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:51.144841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:51.144898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:51.170591  488914 cri.go:89] found id: ""
	I1202 21:47:51.170605  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.170622  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:51.170630  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:51.170641  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:51.201061  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:51.201078  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:51.268903  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:51.268922  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:51.286516  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:51.286532  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:51.360635  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:51.352997   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.353506   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.354983   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.355562   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.357043   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:51.360647  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:51.360658  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:53.937801  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:53.951326  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:53.951403  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:53.981411  488914 cri.go:89] found id: ""
	I1202 21:47:53.981424  488914 logs.go:282] 0 containers: []
	W1202 21:47:53.981431  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:53.981444  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:53.981504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:54.019553  488914 cri.go:89] found id: ""
	I1202 21:47:54.019568  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.019576  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:54.019581  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:54.019641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:54.045870  488914 cri.go:89] found id: ""
	I1202 21:47:54.045884  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.045891  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:54.045896  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:54.045960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:54.072428  488914 cri.go:89] found id: ""
	I1202 21:47:54.072443  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.072450  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:54.072455  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:54.072519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:54.098413  488914 cri.go:89] found id: ""
	I1202 21:47:54.098427  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.098434  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:54.098439  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:54.098497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:54.124502  488914 cri.go:89] found id: ""
	I1202 21:47:54.124517  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.124524  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:54.124529  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:54.124589  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:54.151244  488914 cri.go:89] found id: ""
	I1202 21:47:54.151258  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.151265  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:54.151273  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:54.151284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:54.213677  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:54.205894   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.206296   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.207892   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.208209   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.209760   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:54.205894   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.206296   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.207892   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.208209   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.209760   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:54.213688  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:54.213700  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:54.289814  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:54.289835  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:54.319415  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:54.319432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:54.385725  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:54.385745  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
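
The block above is one complete pass of minikube's log-gathering cycle: probe for a live apiserver process with pgrep, ask the CRI runtime for each control-plane container by name, then collect describe-nodes, CRI-O, container-status, kubelet, and dmesg output. The per-component probe is equivalent to a loop along these lines (a minimal sketch, assuming crictl is on PATH on the node):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] || echo "No container was found matching \"$c\""
	done
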
	I1202 21:47:56.902920  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:56.915363  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:56.915439  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:56.942569  488914 cri.go:89] found id: ""
	I1202 21:47:56.942583  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.942590  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:56.942596  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:56.942655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:56.975362  488914 cri.go:89] found id: ""
	I1202 21:47:56.975384  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.975391  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:56.975397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:56.975456  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:57.006861  488914 cri.go:89] found id: ""
	I1202 21:47:57.006877  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.006884  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:57.006890  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:57.006958  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:57.033667  488914 cri.go:89] found id: ""
	I1202 21:47:57.033682  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.033689  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:57.033695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:57.033751  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:57.059458  488914 cri.go:89] found id: ""
	I1202 21:47:57.059472  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.059479  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:57.059484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:57.059544  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:57.086098  488914 cri.go:89] found id: ""
	I1202 21:47:57.086112  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.086130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:57.086136  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:57.086206  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:57.112732  488914 cri.go:89] found id: ""
	I1202 21:47:57.112747  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.112754  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:57.112762  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:57.112773  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:57.141211  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:57.141226  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:57.210823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:57.210842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:57.226149  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:57.226166  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:57.287720  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:57.280020   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.280594   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282136   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282592   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.284108   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:57.280020   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.280594   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282136   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282592   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.284108   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:57.287730  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:57.287742  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:59.865507  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:59.875824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:59.875886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:59.901721  488914 cri.go:89] found id: ""
	I1202 21:47:59.901735  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.901741  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:59.901747  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:59.901834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:59.938763  488914 cri.go:89] found id: ""
	I1202 21:47:59.938777  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.938784  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:59.938789  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:59.938844  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:59.968613  488914 cri.go:89] found id: ""
	I1202 21:47:59.968627  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.968634  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:59.968639  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:59.968696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:00.011145  488914 cri.go:89] found id: ""
	I1202 21:48:00.011162  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.011172  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:00.011179  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:00.011248  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:00.128636  488914 cri.go:89] found id: ""
	I1202 21:48:00.128653  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.128662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:00.128668  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:00.128743  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:00.191602  488914 cri.go:89] found id: ""
	I1202 21:48:00.191633  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.191642  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:00.191651  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:00.191735  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:00.286597  488914 cri.go:89] found id: ""
	I1202 21:48:00.286618  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.286626  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:00.286635  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:00.286657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:00.393972  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:00.394009  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:00.425438  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:00.425462  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:00.522799  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:00.522810  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:00.522822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:00.603332  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:00.603356  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
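
Two details of the gathering commands are worth noting. The container-status step degrades deliberately: it resolves the binary via `which crictl || echo crictl` so the command still runs (and fails visibly) when crictl is missing, and falls back to `docker ps -a` on Docker-runtime nodes. The dmesg step filters to warning severity and worse and caps the output; spelled out (flags per util-linux dmesg: -P suppresses the pager that -H would otherwise invoke, -L=never disables color, --level filters severities):

	# warnings and worse only, no pager/color, last 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
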
	I1202 21:48:03.142041  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:03.152666  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:03.152730  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:03.179575  488914 cri.go:89] found id: ""
	I1202 21:48:03.179589  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.179596  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:03.179601  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:03.179666  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:03.208278  488914 cri.go:89] found id: ""
	I1202 21:48:03.208293  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.208300  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:03.208305  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:03.208365  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:03.237068  488914 cri.go:89] found id: ""
	I1202 21:48:03.237081  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.237088  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:03.237093  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:03.237150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:03.262185  488914 cri.go:89] found id: ""
	I1202 21:48:03.262199  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.262206  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:03.262212  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:03.262270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:03.287056  488914 cri.go:89] found id: ""
	I1202 21:48:03.287076  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.287082  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:03.287088  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:03.287150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:03.312745  488914 cri.go:89] found id: ""
	I1202 21:48:03.312759  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.312766  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:03.312774  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:03.312831  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:03.337493  488914 cri.go:89] found id: ""
	I1202 21:48:03.337507  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.337514  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:03.337522  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:03.337535  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:03.398946  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:03.398957  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:03.398969  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:03.475063  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:03.475083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:03.502836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:03.502852  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:03.569966  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:03.569985  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.085423  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:06.096220  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:06.096284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:06.124362  488914 cri.go:89] found id: ""
	I1202 21:48:06.124378  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.124384  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:06.124392  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:06.124451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:06.150807  488914 cri.go:89] found id: ""
	I1202 21:48:06.150822  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.150829  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:06.150835  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:06.150896  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:06.177096  488914 cri.go:89] found id: ""
	I1202 21:48:06.177110  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.177117  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:06.177122  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:06.177189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:06.202670  488914 cri.go:89] found id: ""
	I1202 21:48:06.202684  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.202691  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:06.202697  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:06.202760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:06.227599  488914 cri.go:89] found id: ""
	I1202 21:48:06.227614  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.227626  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:06.227632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:06.227692  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:06.252361  488914 cri.go:89] found id: ""
	I1202 21:48:06.252375  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.252381  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:06.252387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:06.252443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:06.278301  488914 cri.go:89] found id: ""
	I1202 21:48:06.278315  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.278323  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:06.278331  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:06.278341  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:06.344608  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:06.344629  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.359909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:06.359925  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:06.427972  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:06.427982  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:06.427993  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:06.503390  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:06.503409  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.032284  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:09.043491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:09.043554  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:09.073343  488914 cri.go:89] found id: ""
	I1202 21:48:09.073358  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.073365  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:09.073371  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:09.073438  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:09.106311  488914 cri.go:89] found id: ""
	I1202 21:48:09.106325  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.106332  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:09.106337  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:09.106400  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:09.137607  488914 cri.go:89] found id: ""
	I1202 21:48:09.137622  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.137630  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:09.137635  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:09.137696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:09.165465  488914 cri.go:89] found id: ""
	I1202 21:48:09.165479  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.165486  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:09.165491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:09.165553  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:09.191695  488914 cri.go:89] found id: ""
	I1202 21:48:09.191709  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.191715  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:09.191721  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:09.191778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:09.217199  488914 cri.go:89] found id: ""
	I1202 21:48:09.217213  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.217221  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:09.217227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:09.217284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:09.243947  488914 cri.go:89] found id: ""
	I1202 21:48:09.243961  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.243977  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:09.243985  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:09.243995  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:09.259022  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:09.259038  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:09.325462  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:09.325472  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:09.325483  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:09.404565  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:09.404586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.435844  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:09.435860  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:12.005527  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:12.017298  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:12.017364  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:12.043631  488914 cri.go:89] found id: ""
	I1202 21:48:12.043645  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.043652  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:12.043657  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:12.043717  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:12.072548  488914 cri.go:89] found id: ""
	I1202 21:48:12.072562  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.072569  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:12.072574  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:12.072634  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:12.097779  488914 cri.go:89] found id: ""
	I1202 21:48:12.097792  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.097799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:12.097806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:12.097861  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:12.122380  488914 cri.go:89] found id: ""
	I1202 21:48:12.122394  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.122400  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:12.122406  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:12.122462  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:12.147485  488914 cri.go:89] found id: ""
	I1202 21:48:12.147499  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.147506  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:12.147511  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:12.147569  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:12.172352  488914 cri.go:89] found id: ""
	I1202 21:48:12.172372  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.172379  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:12.172385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:12.172451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:12.197386  488914 cri.go:89] found id: ""
	I1202 21:48:12.197400  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.197406  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:12.197414  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:12.197425  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:12.212275  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:12.212291  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:12.283599  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:12.283609  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:12.283620  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:12.362146  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:12.362177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:12.394426  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:12.394452  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
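
The timestamps show this cycle repeating on a steady cadence: a fresh pgrep probe lands roughly every three seconds (21:47:53, 21:47:56, 21:47:59, 21:48:03, ...), i.e., minikube is polling for the apiserver to come up rather than failing fast. A hypothetical reduction of that wait loop:

	# poll until an apiserver process for this profile appears (mirrors the probe above)
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	  sleep 3
	done
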
	I1202 21:48:14.959300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:14.969317  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:14.969378  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:14.995679  488914 cri.go:89] found id: ""
	I1202 21:48:14.995693  488914 logs.go:282] 0 containers: []
	W1202 21:48:14.995701  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:14.995706  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:14.995767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:15.039291  488914 cri.go:89] found id: ""
	I1202 21:48:15.039307  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.039316  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:15.039322  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:15.039440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:15.066778  488914 cri.go:89] found id: ""
	I1202 21:48:15.066793  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.066800  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:15.066806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:15.066866  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:15.096009  488914 cri.go:89] found id: ""
	I1202 21:48:15.096031  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.096039  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:15.096045  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:15.096109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:15.124965  488914 cri.go:89] found id: ""
	I1202 21:48:15.124980  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.124987  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:15.124992  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:15.125055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:15.151140  488914 cri.go:89] found id: ""
	I1202 21:48:15.151155  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.151162  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:15.151168  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:15.151225  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:15.180343  488914 cri.go:89] found id: ""
	I1202 21:48:15.180362  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.180369  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:15.180378  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:15.180389  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:15.245885  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:15.245905  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:15.261189  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:15.261204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:15.329096  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:15.329106  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:15.329119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:15.404768  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:15.404789  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:17.936657  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:17.948615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:17.948678  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:17.980274  488914 cri.go:89] found id: ""
	I1202 21:48:17.980288  488914 logs.go:282] 0 containers: []
	W1202 21:48:17.980295  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:17.980301  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:17.980358  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:18.009972  488914 cri.go:89] found id: ""
	I1202 21:48:18.009988  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.009995  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:18.010000  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:18.010068  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:18.037292  488914 cri.go:89] found id: ""
	I1202 21:48:18.037307  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.037314  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:18.037320  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:18.037389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:18.068010  488914 cri.go:89] found id: ""
	I1202 21:48:18.068025  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.068034  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:18.068039  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:18.068100  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:18.098519  488914 cri.go:89] found id: ""
	I1202 21:48:18.098537  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.098545  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:18.098552  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:18.098616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:18.125321  488914 cri.go:89] found id: ""
	I1202 21:48:18.125336  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.125343  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:18.125349  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:18.125408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:18.154110  488914 cri.go:89] found id: ""
	I1202 21:48:18.154124  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.154131  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:18.154139  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:18.154161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:18.186862  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:18.186879  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:18.252168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:18.252188  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:18.267297  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:18.267312  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:18.330969  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:18.322138   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.322985   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.324625   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.325317   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.326981   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:18.330979  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:18.330989  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
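	(The cycle above, repeated several times below, is minikube probing for control-plane containers with crictl and, finding none, collecting kubelet/dmesg/CRI-O logs instead. A minimal sketch of running the same probe by hand, assuming a placeholder profile name <profile> that is not taken from this log:

	    # Reproduce minikube's control-plane probe inside the node.
	    minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver
	    # An empty result corresponds to the 'found id: ""' lines in the log.
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager
	)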
	I1202 21:48:20.906864  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:20.918719  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:20.918779  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:20.946664  488914 cri.go:89] found id: ""
	I1202 21:48:20.946681  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.946688  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:20.946694  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:20.946757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:20.973074  488914 cri.go:89] found id: ""
	I1202 21:48:20.973088  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.973095  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:20.973100  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:20.973160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:20.998478  488914 cri.go:89] found id: ""
	I1202 21:48:20.998495  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.998503  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:20.998509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:20.998582  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:21.033676  488914 cri.go:89] found id: ""
	I1202 21:48:21.033691  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.033708  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:21.033714  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:21.033773  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:21.059527  488914 cri.go:89] found id: ""
	I1202 21:48:21.059549  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.059557  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:21.059562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:21.059623  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:21.088534  488914 cri.go:89] found id: ""
	I1202 21:48:21.088548  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.088555  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:21.088562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:21.088618  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:21.114102  488914 cri.go:89] found id: ""
	I1202 21:48:21.114116  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.114123  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:21.114130  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:21.114141  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:21.176428  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:21.168087   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.168660   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.170374   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.171027   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.172682   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:21.176438  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:21.176449  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:21.251600  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:21.251621  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:21.278584  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:21.278600  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:21.350258  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:21.350279  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:23.865709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:23.876050  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:23.876119  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:23.906000  488914 cri.go:89] found id: ""
	I1202 21:48:23.906014  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.906021  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:23.906027  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:23.906094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:23.934001  488914 cri.go:89] found id: ""
	I1202 21:48:23.934015  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.934022  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:23.934028  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:23.934088  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:23.969619  488914 cri.go:89] found id: ""
	I1202 21:48:23.969633  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.969640  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:23.969645  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:23.969710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:23.997123  488914 cri.go:89] found id: ""
	I1202 21:48:23.997137  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.997144  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:23.997149  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:23.997211  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:24.027561  488914 cri.go:89] found id: ""
	I1202 21:48:24.027576  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.027584  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:24.027590  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:24.027660  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:24.053543  488914 cri.go:89] found id: ""
	I1202 21:48:24.053558  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.053565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:24.053570  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:24.053641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:24.080080  488914 cri.go:89] found id: ""
	I1202 21:48:24.080094  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.080101  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:24.080109  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:24.080119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:24.147092  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:24.147112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:24.162650  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:24.162666  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:24.225019  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:24.225029  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:24.225039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:24.300286  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:24.300307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
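	(Every "describe nodes" attempt above fails with connection refused on localhost:8441, the apiserver port used by this profile. A hedged sketch for confirming from inside the node that nothing is listening there; these are standard tools, not commands taken from this log:

	    # Check whether any process is bound to the apiserver port.
	    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
	    # If something is listening, probe the apiserver health endpoint.
	    curl -sk https://localhost:8441/livez || true
	)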
	I1202 21:48:26.831634  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:26.843079  488914 kubeadm.go:602] duration metric: took 4m3.730369294s to restartPrimaryControlPlane
	W1202 21:48:26.843152  488914 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:48:26.843233  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:48:27.259211  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:48:27.272350  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:48:27.280460  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:48:27.280517  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:48:27.288570  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:48:27.288578  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:48:27.288628  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:48:27.296654  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:48:27.296709  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:48:27.304086  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:48:27.311898  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:48:27.311953  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:48:27.319289  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.326825  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:48:27.326888  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.334620  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:48:27.342084  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:48:27.342139  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
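	(The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint. A minimal sketch of the same loop, with the endpoint copied from this run:

	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Remove the file when it does not reference the expected endpoint
	      # (grep also exits non-zero when the file is absent, as in this log).
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	)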
	I1202 21:48:27.349467  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:48:27.386582  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:48:27.386896  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:48:27.472364  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:48:27.472439  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:48:27.472489  488914 kubeadm.go:319] OS: Linux
	I1202 21:48:27.472545  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:48:27.472601  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:48:27.472644  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:48:27.472700  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:48:27.472753  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:48:27.472804  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:48:27.472859  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:48:27.472915  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:48:27.472973  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:48:27.543309  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:48:27.543431  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:48:27.543527  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:48:27.554036  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:48:27.559373  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:48:27.559468  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:48:27.559542  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:48:27.559629  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:48:27.559701  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:48:27.559787  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:48:27.559841  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:48:27.559915  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:48:27.559985  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:48:27.560076  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:48:27.560159  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:48:27.560210  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:48:27.560269  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:48:27.850282  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:48:28.505037  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:48:28.762985  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:48:28.951263  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:48:29.183372  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:48:29.184043  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:48:29.186561  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:48:29.189676  488914 out.go:252]   - Booting up control plane ...
	I1202 21:48:29.189765  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:48:29.189838  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:48:29.191619  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:48:29.207350  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:48:29.207778  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:48:29.215590  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:48:29.215853  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:48:29.216063  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:48:29.353309  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:48:29.353417  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:52:29.354218  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001230264s
	I1202 21:52:29.354245  488914 kubeadm.go:319] 
	I1202 21:52:29.354298  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:52:29.354329  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:52:29.354427  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:52:29.354432  488914 kubeadm.go:319] 
	I1202 21:52:29.354529  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:52:29.354559  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:52:29.354587  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:52:29.354590  488914 kubeadm.go:319] 
	I1202 21:52:29.358907  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:52:29.359370  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:52:29.359489  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:52:29.359719  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:52:29.359724  488914 kubeadm.go:319] 
	I1202 21:52:29.359816  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:52:29.359952  488914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230264s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
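	(The SystemVerification warning above flags cgroup v1 deprecation, so this 5.15 AWS kernel is evidently still running the legacy cgroup v1 hierarchy. A hedged check of the host's cgroup version; the 'FailCgroupV1' option name is quoted from the warning text itself, not from minikube's configuration:

	    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
	    stat -fc %T /sys/fs/cgroup
	    # Per the warning, kubelet v1.35+ on cgroup v1 needs an explicit opt-in in
	    # KubeletConfiguration, e.g.:  failCgroupV1: false
	)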
	
	I1202 21:52:29.360041  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:52:29.774288  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:52:29.786781  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:52:29.786832  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:52:29.794551  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:52:29.794562  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:52:29.794615  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:52:29.802140  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:52:29.802200  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:52:29.809778  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:52:29.817315  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:52:29.817375  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:52:29.824944  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.832581  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:52:29.832636  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.840105  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:52:29.848039  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:52:29.848102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:52:29.855571  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:52:29.895459  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:52:29.895508  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:52:29.966851  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:52:29.966918  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:52:29.966952  488914 kubeadm.go:319] OS: Linux
	I1202 21:52:29.967027  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:52:29.967074  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:52:29.967120  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:52:29.967166  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:52:29.967212  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:52:29.967259  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:52:29.967302  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:52:29.967348  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:52:29.967393  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:52:30.044273  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:52:30.044406  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:52:30.044512  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:52:30.059289  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:52:30.064606  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:52:30.064707  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:52:30.064778  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:52:30.064861  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:52:30.064927  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:52:30.065002  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:52:30.065061  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:52:30.065130  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:52:30.065197  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:52:30.065280  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:52:30.065358  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:52:30.065394  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:52:30.065457  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:52:30.391272  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:52:30.580061  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:52:30.892953  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:52:31.052311  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:52:31.356833  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:52:31.357398  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:52:31.360444  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:52:31.363666  488914 out.go:252]   - Booting up control plane ...
	I1202 21:52:31.363767  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:52:31.363843  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:52:31.364787  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:52:31.380952  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:52:31.381067  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:52:31.389182  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:52:31.389514  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:52:31.389769  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:52:31.510935  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:52:31.511077  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:56:31.511610  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043188s
	I1202 21:56:31.511635  488914 kubeadm.go:319] 
	I1202 21:56:31.511691  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:56:31.511724  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:56:31.511828  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:56:31.511833  488914 kubeadm.go:319] 
	I1202 21:56:31.511936  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:56:31.511966  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:56:31.511996  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:56:31.511999  488914 kubeadm.go:319] 
	I1202 21:56:31.516147  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:56:31.516591  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:56:31.516707  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:56:31.516982  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:56:31.516989  488914 kubeadm.go:319] 
	I1202 21:56:31.517086  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
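	(Both init attempts fail identically: kubeadm waits 4m0s for a healthy kubelet at 127.0.0.1:10248 and gives up. Following the troubleshooting hints printed above, a sketch for inspecting the kubelet directly, again with <profile> as a placeholder:

	    minikube ssh -p <profile> -- sudo systemctl status kubelet --no-pager
	    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    # The same health endpoint kubeadm polls during wait-control-plane:
	    minikube ssh -p <profile> -- curl -sS http://127.0.0.1:10248/healthz
	)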
	I1202 21:56:31.517154  488914 kubeadm.go:403] duration metric: took 12m8.4399317s to StartCluster
	I1202 21:56:31.517186  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:56:31.517279  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:56:31.545508  488914 cri.go:89] found id: ""
	I1202 21:56:31.545521  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.545528  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:56:31.545538  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:56:31.545593  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:56:31.573505  488914 cri.go:89] found id: ""
	I1202 21:56:31.573519  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.573526  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:56:31.573532  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:56:31.573594  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:56:31.598620  488914 cri.go:89] found id: ""
	I1202 21:56:31.598634  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.598642  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:56:31.598647  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:56:31.598718  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:56:31.624500  488914 cri.go:89] found id: ""
	I1202 21:56:31.624514  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.624522  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:56:31.624528  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:56:31.624590  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:56:31.650576  488914 cri.go:89] found id: ""
	I1202 21:56:31.650591  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.650598  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:56:31.650604  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:56:31.650665  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:56:31.677681  488914 cri.go:89] found id: ""
	I1202 21:56:31.677696  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.677703  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:56:31.677709  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:56:31.677772  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:56:31.702889  488914 cri.go:89] found id: ""
	I1202 21:56:31.702903  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.702910  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:56:31.702918  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:56:31.702928  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:56:31.769428  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:56:31.769447  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:56:31.784680  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:56:31.784696  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:56:31.848558  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:56:31.848570  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:56:31.848581  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:56:31.924323  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:56:31.924343  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:56:31.952600  488914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:56:31.952640  488914 out.go:285] * 
	W1202 21:56:31.952744  488914 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1202 21:56:31.952799  488914 out.go:285] * 
	W1202 21:56:31.955203  488914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:56:31.960375  488914 out.go:203] 
	W1202 21:56:31.963105  488914 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1202 21:56:31.963144  488914 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:56:31.963163  488914 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:56:31.966130  488914 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319873349Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319910075Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.319954892Z" level=info msg="Create NRI interface"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320093511Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320107279Z" level=info msg="runtime interface created"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320122122Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320128136Z" level=info msg="runtime interface starting up..."
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320134996Z" level=info msg="starting plugins..."
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320149281Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:44:21 functional-066896 crio[10511]: time="2025-12-02T21:44:21.320216843Z" level=info msg="No systemd watchdog enabled"
	Dec 02 21:44:21 functional-066896 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.546712318Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5da9591-660f-4540-8512-c986d215b6ce name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.547792951Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b20776b9-be53-485c-9f3f-546c9d76585b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.551358438Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=f47f53c0-c041-4cd6-b337-b6da20818107 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.551918566Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=288944b7-03e2-4e35-a724-f6224d5602e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.552447137Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=86197a73-b72f-4206-9f23-0ccc39ed5484 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.552829015Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=782c6892-90a1-4091-890f-06b9d64d90fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:48:27 functional-066896 crio[10511]: time="2025-12-02T21:48:27.553189609Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=83b463c2-49c3-44a9-847b-496ac7b6cf23 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.050953685Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b9fc70f0-4149-490c-a9d6-8566800da526 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.054549369Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=400c0ed4-9f98-4d43-b1d5-914d2118a5d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.055394817Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=589e6b72-3a61-4459-a0b4-17ed249e317e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.056191886Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e9ce8021-4d34-47a9-aae1-148ec62cef62 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.056880533Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b08f0833-8ddd-4fa8-bbfc-c47b34e5d923 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.057536891Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=65dc1a28-c124-4bed-86cd-bb1d6daa17da name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:52:30 functional-066896 crio[10511]: time="2025-12-02T21:52:30.058148081Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=948033be-9e28-424a-bbda-a82698af2fb7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:35.390597   21894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:35.391601   21894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:35.393079   21894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:35.393612   21894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:35.395118   21894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:56:35 up  3:38,  0 user,  load average: 0.20, 0.17, 0.32
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:56:32 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:33 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 02 21:56:33 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:33 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:33 functional-066896 kubelet[21769]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:33 functional-066896 kubelet[21769]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:33 functional-066896 kubelet[21769]: E1202 21:56:33.742018   21769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:33 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:33 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:34 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 02 21:56:34 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:34 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:34 functional-066896 kubelet[21795]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:34 functional-066896 kubelet[21795]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:34 functional-066896 kubelet[21795]: E1202 21:56:34.466844   21795 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:34 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:34 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:35 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 967.
	Dec 02 21:56:35 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:35 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:35 functional-066896 kubelet[21847]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:35 functional-066896 kubelet[21847]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:35 functional-066896 kubelet[21847]: E1202 21:56:35.224498   21847 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:35 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:35 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
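The kubelet section above shows the root cause: kubelet v1.35.0-beta.0 fails its own configuration validation on a cgroup v1 host ("cgroup v1 support is unsupported"), so systemd restarts it in a loop (restart counters 965 through 967) and kubeadm's wait-control-plane phase times out after 4m0s. The SystemVerification warning names the escape hatch: set the kubelet configuration option FailCgroupV1 to false. Below is a minimal sketch of that workaround via kubeadm's patch mechanism (the [patches] phase above already targets kubeletconfiguration); the patch directory is a hypothetical path, not something this run used:

	# Hypothetical patch directory; kubeadm matches patch files to targets by filename.
	mkdir -p /tmp/kubelet-patches
	cat > /tmp/kubelet-patches/kubeletconfiguration+strategic.yaml <<-'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # keep kubelet starting on a cgroup v1 host (deprecated; see the KEP linked above)
	EOF
	# Re-run the same init with the patch merged into the generated kubelet config:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubelet-patches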
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (365.51223ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)
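For the record, the probe that kubeadm gave up on can be reproduced by hand with the exact commands quoted in the failure text; the minikube ssh wrapper and the tail are additions for convenience, the curl and journalctl invocations are taken verbatim from the log:

	minikube -p functional-066896 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	minikube -p functional-066896 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 20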
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-066896 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-066896 apply -f testdata/invalidsvc.yaml: exit status 1 (58.741617ms)
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2328: kubectl --context functional-066896 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-066896 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-066896 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-066896 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-066896 --alsologtostderr -v=1] stderr:
I1202 21:58:47.104076  507774 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:47.104231  507774 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:47.104248  507774 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:47.104264  507774 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:47.104531  507774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:47.104800  507774 mustload.go:66] Loading cluster: functional-066896
I1202 21:58:47.105234  507774 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:47.105724  507774 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:47.122731  507774 host.go:66] Checking if "functional-066896" exists ...
I1202 21:58:47.123082  507774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:58:47.190178  507774 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:47.179793132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:58:47.190295  507774 api_server.go:166] Checking apiserver status ...
I1202 21:58:47.190362  507774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 21:58:47.190407  507774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:47.210524  507774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
W1202 21:58:47.320463  507774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1202 21:58:47.323685  507774 out.go:179] * The control-plane node functional-066896 apiserver is not running: (state=Stopped)
I1202 21:58:47.326448  507774 out.go:179]   To start a cluster, run: "minikube start -p functional-066896"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:
-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
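The inspect output shows the container itself is fine: its State is running and the apiserver port 8441/tcp is published at 127.0.0.1:33151, even though nothing answers behind it. The mapped host port can be read back with the same Go-template query minikube runs above for 22/tcp, swapping in the 8441/tcp key:

	docker container inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-066896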
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (324.013007ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh       │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount     │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh       │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh       │ functional-066896 ssh -- ls -la /mount-9p                                                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh       │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount     │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount2 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount     │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount1 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh       │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount     │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount3 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh       │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh       │ functional-066896 ssh findmnt -T /mount2                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh       │ functional-066896 ssh findmnt -T /mount3                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ mount     │ -p functional-066896 --kill=true                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ addons    │ functional-066896 addons list                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ addons    │ functional-066896 addons list -o json                                                                                                               │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ service   │ functional-066896 service list                                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service   │ functional-066896 service list -o json                                                                                                              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service   │ functional-066896 service --namespace=default --https --url hello-node                                                                              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service   │ functional-066896 service hello-node --url --format={{.IP}}                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service   │ functional-066896 service hello-node --url                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start     │ -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start     │ -p functional-066896 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start     │ -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-066896 --alsologtostderr -v=1                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:58:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:58:46.901861  507730 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:58:46.902020  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902049  507730 out.go:374] Setting ErrFile to fd 2...
	I1202 21:58:46.902055  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902463  507730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:58:46.902886  507730 out.go:368] Setting JSON to false
	I1202 21:58:46.903818  507730 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13255,"bootTime":1764699472,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:58:46.903890  507730 start.go:143] virtualization:  
	I1202 21:58:46.907131  507730 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:58:46.910758  507730 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:58:46.910832  507730 notify.go:221] Checking for updates...
	I1202 21:58:46.916328  507730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:58:46.919207  507730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:58:46.922097  507730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:58:46.924927  507730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:58:46.927693  507730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:58:46.931080  507730 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:58:46.931712  507730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:58:46.967128  507730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:58:46.967244  507730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:47.036134  507730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:47.026846878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:47.036254  507730 docker.go:319] overlay module found
	I1202 21:58:47.039414  507730 out.go:179] * Using the docker driver based on existing profile
	I1202 21:58:47.042260  507730 start.go:309] selected driver: docker
	I1202 21:58:47.042282  507730 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:47.042390  507730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:58:47.045971  507730 out.go:203] 
	W1202 21:58:47.048833  507730 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:58:47.051708  507730 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.45283707Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=bf00db59-611c-44fb-b66b-5de338fe239d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486207629Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486338707Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486372447Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-066896 found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.31254149Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=02dfde09-63cb-48a9-bc75-2498ded8aebd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338777762Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338914322Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338952624Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364142306Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364305064Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364345213Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-066896 found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.448620533Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=2c172ded-5053-4702-8981-86fe65b3eb5a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473261763Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473491575Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473554164Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502089674Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502268679Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502308638Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-066896 found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.270878698Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=6683c882-fed2-46df-a5c6-4c16ad59fbea name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300274442Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300423301Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300466198Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325738621Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325897843Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325952326Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-066896 found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:58:48.388810   24545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:48.389503   24545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:48.391290   24545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:48.391894   24545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:48.393496   24545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:58:48 up  3:40,  0 user,  load average: 0.78, 0.34, 0.36
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:58:45 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 02 21:58:46 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:46 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: E1202 21:58:46.488518   24401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 02 21:58:47 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:47 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:47 functional-066896 kubelet[24431]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:47 functional-066896 kubelet[24431]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:47 functional-066896 kubelet[24431]: E1202 21:58:47.239561   24431 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1144.
	Dec 02 21:58:47 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:47 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:47 functional-066896 kubelet[24460]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:47 functional-066896 kubelet[24460]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:47 functional-066896 kubelet[24460]: E1202 21:58:47.977734   24460 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:47 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
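The kubelet units in the log above are crash-looping on a single validation error: a v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, and the docker info earlier in the run (CgroupDriver:cgroupfs on Ubuntu 20.04, kernel 5.15) shows this machine is still on the legacy hierarchy. A minimal Go sketch of that detection, assuming Linux and the standard /sys/fs/cgroup mount (an illustration, not minikube's or the kubelet's source):

	package main

	import (
		"fmt"
		"syscall"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h; /sys/fs/cgroup reports this
	// filesystem type only on a unified (v2) hierarchy.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs /sys/fs/cgroup failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			// On v1 hosts /sys/fs/cgroup is a tmpfs of per-controller mounts,
			// which is exactly the condition the kubelet error above rejects.
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}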
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (317.150691ms)

-- stdout --
	Stopped

-- /stdout --
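The Stopped apiserver status here and the describe-nodes failures earlier in the log are the same symptom: nothing is listening on the forwarded apiserver port, so every request to localhost:8441 is refused. A self-contained Go sketch of that reachability check, using the APIServerPort from the profile config above (illustrative, not the harness's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Port 8441 is APIServerPort in the cluster config shown above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With the apiserver down this prints the same "connection refused"
			// seen in the describe-nodes stderr.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}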
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)
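The Last Start log above is the dry-run start with --memory 250MB recorded in the audit table: minikube parses the requested size and rejects anything below its usable floor (1800MB per the message). A hedged sketch of that check; the constant and function names here are illustrative, not minikube's source:

	package main

	import "fmt"

	// Usable-memory floor quoted by the error message, in MB.
	const minUsableMB = 1800

	func validateMemory(requestedMiB int) error {
		if requestedMiB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMiB, minUsableMB)
		}
		return nil
	}

	func main() {
		// --memory 250MB from the audited dry-run start parses to 250MiB:
		if err := validateMemory(250); err != nil {
			fmt.Println(err)
		}
	}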

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 status: exit status 2 (317.800139ms)

-- stdout --
	functional-066896
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-066896 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (427.485959ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-066896 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 status -o json: exit status 2 (342.738466ms)

-- stdout --
	{"Name":"functional-066896","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-066896 status -o json" : exit status 2
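All three status invocations exit 2 because the exit code tracks component state rather than command failure (the harness itself later notes "status error: exit status 2 (may be ok)"): the host is Running while kubelet and apiserver are Stopped. The -f flag renders a Go text/template over the same struct that -o json serializes; a self-contained sketch, with a Status struct mirroring the JSON above (an illustration, not minikube's source):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the -o json output shown above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		s := Status{Name: "functional-066896", Host: "Running",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, s) // prints the line in the second stdout block
	}

Note that the test's format string spells the key "kublet"; literal text is copied through unchanged and only the {{.Kubelet}} field reference performs a lookup, which is why the misspelling survives into the rendered output above.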
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
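Every PortBindings entry in the inspect output requests HostIp 127.0.0.1 with an empty HostPort, so dockerd assigns ephemeral host ports; the assignments land under NetworkSettings.Ports, where 8441/tcp (the apiserver) maps to 127.0.0.1:33151. A short Go sketch of pulling that mapping out of docker inspect JSON (field names are taken from the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// Just the fields we need; docker inspect emits a JSON array of these.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-066896").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
			return
		}
		for port, bindings := range containers[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}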
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (301.02454ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-066896 ssh cat /mount-9p/test-1764712610748691523                                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh -- ls -la /mount-9p                                                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount2 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount1 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount3 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount2                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount3                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ mount   │ -p functional-066896 --kill=true                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ addons  │ functional-066896 addons list                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ addons  │ functional-066896 addons list -o json                                                                                                               │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ service │ functional-066896 service list                                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service │ functional-066896 service list -o json                                                                                                              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service │ functional-066896 service --namespace=default --https --url hello-node                                                                              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service │ functional-066896 service hello-node --url --format={{.IP}}                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service │ functional-066896 service hello-node --url                                                                                                          │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start   │ -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start   │ -p functional-066896 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:58:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:58:44.226338  507146 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:58:44.226450  507146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:44.226456  507146 out.go:374] Setting ErrFile to fd 2...
	I1202 21:58:44.226460  507146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:44.226821  507146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:58:44.227252  507146 out.go:368] Setting JSON to false
	I1202 21:58:44.228075  507146 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13253,"bootTime":1764699472,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:58:44.228161  507146 start.go:143] virtualization:  
	I1202 21:58:44.231612  507146 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:58:44.234594  507146 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:58:44.234709  507146 notify.go:221] Checking for updates...
	I1202 21:58:44.240211  507146 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:58:44.243116  507146 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:58:44.245891  507146 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:58:44.248689  507146 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:58:44.251538  507146 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:58:44.254871  507146 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:58:44.255542  507146 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:58:44.284501  507146 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:58:44.284613  507146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:44.341902  507146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:44.332975292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:44.342014  507146 docker.go:319] overlay module found
	I1202 21:58:44.345129  507146 out.go:179] * Using the docker driver based on existing profile
	I1202 21:58:44.348021  507146 start.go:309] selected driver: docker
	I1202 21:58:44.348041  507146 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:44.348146  507146 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:58:44.348250  507146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:44.415724  507146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:44.407155925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:44.416161  507146 cni.go:84] Creating CNI manager for ""
	I1202 21:58:44.416230  507146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:58:44.416269  507146 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:44.419437  507146 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.45283707Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=bf00db59-611c-44fb-b66b-5de338fe239d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486207629Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486338707Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486372447Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-066896 found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.31254149Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=02dfde09-63cb-48a9-bc75-2498ded8aebd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338777762Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338914322Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338952624Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-066896 found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364142306Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364305064Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364345213Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.448620533Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=2c172ded-5053-4702-8981-86fe65b3eb5a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473261763Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473491575Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473554164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502089674Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502268679Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502308638Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.270878698Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=6683c882-fed2-46df-a5c6-4c16ad59fbea name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300274442Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300423301Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300466198Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325738621Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325897843Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325952326Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:58:46.430792   24397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:46.431549   24397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:46.433160   24397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:46.433488   24397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:46.434971   24397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:58:46 up  3:40,  0 user,  load average: 0.68, 0.31, 0.35
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:58:44 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:44 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 02 21:58:44 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:44 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:44 functional-066896 kubelet[24265]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:44 functional-066896 kubelet[24265]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:44 functional-066896 kubelet[24265]: E1202 21:58:44.958870   24265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:44 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:44 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:45 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 02 21:58:45 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:45 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:45 functional-066896 kubelet[24300]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:45 functional-066896 kubelet[24300]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:45 functional-066896 kubelet[24300]: E1202 21:58:45.716750   24300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:45 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:45 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 02 21:58:46 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:46 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:46 functional-066896 kubelet[24401]: E1202 21:58:46.488518   24401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:46 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
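The kubelet section of the log above is the root cause of this run's cascade: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver on port 8441 never comes back and every later kubectl call is refused. A minimal way to confirm which cgroup hierarchy the node exposes, as a sketch assuming shell access to the minikube container and GNU stat in its image:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1
	docker exec functional-066896 stat -fc %T /sys/fs/cgroup/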
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (325.076812ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.42s)
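When reproducing the status check by hand, capturing the exit code directly shows the same signal the harness saw; a sketch using the same binary and profile as this run:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
	echo "exit=$?"   # exit=2 here, matching the "Stopped" apiserver state above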

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-066896 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-066896 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.091576ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-066896 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
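The create call fails before any deployment logic runs: nothing is listening on 192.168.49.2:8441. A direct probe of the endpoint separates "apiserver down" from "kubectl misconfigured"; a sketch, assuming the port mapping is still the one docker inspect reports below (8441/tcp published as 127.0.0.1:33151):

	# from inside the node's network
	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	# from the CI host, via the published port
	curl -sk https://127.0.0.1:33151/healthz || echo "apiserver unreachable"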
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-066896 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-066896 describe po hello-node-connect: exit status 1 (67.507512ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-066896 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-066896 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-066896 logs -l app=hello-node-connect: exit status 1 (59.305614ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-066896 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-066896 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-066896 describe svc hello-node-connect: exit status 1 (68.387226ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-066896 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
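All four kubectl post-mortem calls above fail identically on the TCP connect, so their output adds nothing. A single up-front reachability check would make that explicit before describing pods, services, or logs; a sketch:

	kubectl --context functional-066896 get --raw=/readyz || kubectl --context functional-066896 cluster-info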
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
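The full inspect dump is long; when only the published ports or the node IP matter, a Go template trims it down. A sketch against the same container name:

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-066896
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-066896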
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (303.661971ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-066896 ssh -n functional-066896 sudo cat /tmp/does/not/exist/cp-test.txt                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh echo hello                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh cat /etc/hostname                                                                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001:/mount-9p --alsologtostderr -v=1              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh -- ls -la /mount-9p                                                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh cat /mount-9p/test-1764712610748691523                                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh -- ls -la /mount-9p                                                                                                           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh sudo umount -f /mount-9p                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount2 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount1 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ mount   │ -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount3 --alsologtostderr -v=1                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh findmnt -T /mount1                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount2                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh     │ functional-066896 ssh findmnt -T /mount3                                                                                                            │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ mount   │ -p functional-066896 --kill=true                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ addons  │ functional-066896 addons list                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ addons  │ functional-066896 addons list -o json                                                                                                               │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:44:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:44:17.650988  488914 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:44:17.651127  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651131  488914 out.go:374] Setting ErrFile to fd 2...
	I1202 21:44:17.651134  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651388  488914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:44:17.651725  488914 out.go:368] Setting JSON to false
	I1202 21:44:17.652562  488914 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12386,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:44:17.652624  488914 start.go:143] virtualization:  
	I1202 21:44:17.655925  488914 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:44:17.658824  488914 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:44:17.658955  488914 notify.go:221] Checking for updates...
	I1202 21:44:17.664772  488914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:44:17.667672  488914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:44:17.670581  488914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:44:17.673492  488914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:44:17.676281  488914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:44:17.679520  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:17.679615  488914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:44:17.708368  488914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:44:17.708467  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.767956  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.759221256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.768046  488914 docker.go:319] overlay module found
	I1202 21:44:17.771104  488914 out.go:179] * Using the docker driver based on existing profile
	I1202 21:44:17.773889  488914 start.go:309] selected driver: docker
	I1202 21:44:17.773897  488914 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.773983  488914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:44:17.774077  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.834934  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.825868601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.835402  488914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:44:17.835426  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:17.835482  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:17.835523  488914 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.838587  488914 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:44:17.841458  488914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:44:17.844370  488914 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:44:17.847200  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:17.847277  488914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:44:17.866587  488914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:44:17.866598  488914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:44:17.909149  488914 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:44:18.073530  488914 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:44:18.073687  488914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:44:18.073803  488914 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073909  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:44:18.073917  488914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.617µs
	I1202 21:44:18.073927  488914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:44:18.073937  488914 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:44:18.073939  488914 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073964  488914 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073980  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:44:18.073986  488914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.935µs
	I1202 21:44:18.073991  488914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074001  488914 start.go:364] duration metric: took 25.551µs to acquireMachinesLock for "functional-066896"
	I1202 21:44:18.074000  488914 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074014  488914 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:44:18.074021  488914 fix.go:54] fixHost starting: 
	I1202 21:44:18.074029  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:44:18.074034  488914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.037µs
	I1202 21:44:18.074039  488914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074056  488914 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074084  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:44:18.074089  488914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 41.329µs
	I1202 21:44:18.074093  488914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074101  488914 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074151  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:44:18.074156  488914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 55.623µs
	I1202 21:44:18.074160  488914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074169  488914 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074193  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:44:18.074211  488914 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.457µs
	I1202 21:44:18.074217  488914 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:44:18.074232  488914 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074258  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:44:18.074262  488914 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.032µs
	I1202 21:44:18.074267  488914 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:44:18.074276  488914 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:44:18.074274  488914 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074311  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:44:18.074315  488914 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.174µs
	I1202 21:44:18.074320  488914 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:44:18.074327  488914 cache.go:87] Successfully saved all images to host disk.
	I1202 21:44:18.091506  488914 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:44:18.091527  488914 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:44:18.096748  488914 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:44:18.096772  488914 machine.go:94] provisionDockerMachine start ...
	I1202 21:44:18.096874  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.114456  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.114786  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.114793  488914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:44:18.266794  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.266809  488914 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:44:18.266875  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.286274  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.286575  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.286589  488914 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:44:18.448160  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.448232  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.466449  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.466766  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.466781  488914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:44:18.615365  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
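
The hostname provisioning above runs over SSH against the container's published 22/tcp port (127.0.0.1:33148 here, user "docker", key path from the sshutil.go lines below). A minimal sketch of that pattern using golang.org/x/crypto/ssh — the address, user, key path, and command are taken from this log; the code itself is illustrative, not minikube's actual libmachine implementation:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and endpoint as reported in the log above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33148", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.Output("hostname") // same first command the provisioner runs
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }
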
	I1202 21:44:18.615380  488914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:44:18.615404  488914 ubuntu.go:190] setting up certificates
	I1202 21:44:18.615412  488914 provision.go:84] configureAuth start
	I1202 21:44:18.615471  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:18.633069  488914 provision.go:143] copyHostCerts
	I1202 21:44:18.633141  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:44:18.633158  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:44:18.633234  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:44:18.633330  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:44:18.633334  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:44:18.633359  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:44:18.633406  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:44:18.633410  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:44:18.633430  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:44:18.633475  488914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
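
provision.go:117 above generates a server certificate signed by the minikube CA, with the org and SAN list shown. A rough sketch of the same idea with Go's crypto/x509 — self-signed here for brevity (minikube actually signs with its CA cert/key as parent); the names, IPs, and expiry are taken from the log, everything else is illustrative:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-066896"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config below
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list as logged: 127.0.0.1 192.168.49.2 functional-066896 localhost minikube
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-066896", "localhost", "minikube"},
        }
        // Self-signed for the sketch; minikube passes its CA as the parent instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
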
	I1202 21:44:19.174279  488914 provision.go:177] copyRemoteCerts
	I1202 21:44:19.174331  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:44:19.174370  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.190978  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.294889  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:44:19.312628  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:44:19.330566  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:44:19.347713  488914 provision.go:87] duration metric: took 732.278587ms to configureAuth
	I1202 21:44:19.347730  488914 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:44:19.347935  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:19.348040  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.364877  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:19.365168  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:19.365182  488914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:44:19.733535  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:44:19.733548  488914 machine.go:97] duration metric: took 1.636769982s to provisionDockerMachine
	I1202 21:44:19.733558  488914 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:44:19.733570  488914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:44:19.733637  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:44:19.733700  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.752520  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.854929  488914 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:44:19.858053  488914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:44:19.858070  488914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:44:19.858080  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:44:19.858131  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:44:19.858206  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:44:19.858277  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:44:19.858317  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:44:19.865625  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:19.882511  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:44:19.899291  488914 start.go:296] duration metric: took 165.718396ms for postStartSetup
	I1202 21:44:19.899374  488914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:44:19.899409  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.915689  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.016990  488914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:44:20.022912  488914 fix.go:56] duration metric: took 1.948885968s for fixHost
	I1202 21:44:20.022943  488914 start.go:83] releasing machines lock for "functional-066896", held for 1.948933476s
	I1202 21:44:20.023059  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:20.041984  488914 ssh_runner.go:195] Run: cat /version.json
	I1202 21:44:20.042007  488914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:44:20.042033  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.042071  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.064148  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.064737  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.168080  488914 ssh_runner.go:195] Run: systemctl --version
	I1202 21:44:20.290437  488914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:44:20.326220  488914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:44:20.331076  488914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:44:20.331137  488914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:44:20.338791  488914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:44:20.338805  488914 start.go:496] detecting cgroup driver to use...
	I1202 21:44:20.338835  488914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:44:20.338881  488914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:44:20.354128  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:44:20.367183  488914 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:44:20.367236  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:44:20.383031  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:44:20.396225  488914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:44:20.505938  488914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:44:20.631853  488914 docker.go:234] disabling docker service ...
	I1202 21:44:20.631909  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:44:20.647481  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:44:20.660948  488914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:44:20.779859  488914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:44:20.901936  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:44:20.922332  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:44:20.937696  488914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:44:20.937766  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.947525  488914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:44:20.947591  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.956868  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.966757  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.976111  488914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:44:20.984116  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.993108  488914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.003934  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.015041  488914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:44:21.023179  488914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:44:21.030977  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.150076  488914 ssh_runner.go:195] Run: sudo systemctl restart crio
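
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf through a chain of sed one-liners (pause image, cgroup manager, conmon_cgroup, default_sysctls), then reloads systemd and restarts CRI-O. A hedged Go equivalent of just the first edit, the pause_image rewrite — file path and value come from the log; the helper itself is illustrative:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
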
	I1202 21:44:21.327555  488914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:44:21.327622  488914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:44:21.331404  488914 start.go:564] Will wait 60s for crictl version
	I1202 21:44:21.331471  488914 ssh_runner.go:195] Run: which crictl
	I1202 21:44:21.335016  488914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:44:21.359060  488914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:44:21.359133  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.387110  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.420984  488914 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:44:21.423772  488914 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:44:21.440341  488914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:44:21.447237  488914 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:44:21.449900  488914 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:44:21.450046  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:21.450110  488914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:44:21.483620  488914 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:44:21.483631  488914 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:44:21.483637  488914 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:44:21.483726  488914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:44:21.483815  488914 ssh_runner.go:195] Run: crio config
	I1202 21:44:21.540157  488914 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:44:21.540183  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:21.540190  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:21.540200  488914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:44:21.540251  488914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:44:21.540412  488914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
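The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch, using gopkg.in/yaml.v3, of pulling the KubeletConfiguration out of such a stream to sanity-check the cgroup driver against the crio.go:70 setting earlier — the struct here is a throwaway for illustration, not kubeadm's real Go types:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    type doc struct {
        Kind         string `yaml:"kind"`
        CgroupDriver string `yaml:"cgroupDriver"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
        for {
            var d doc
            if err := dec.Decode(&d); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            if d.Kind == "KubeletConfiguration" {
                // Should print "cgroupfs", matching the CRI-O cgroup_manager configured above.
                fmt.Println("kubelet cgroup driver:", d.CgroupDriver)
            }
        }
    }
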
	I1202 21:44:21.540486  488914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:44:21.551296  488914 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:44:21.551378  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:44:21.559159  488914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:44:21.572470  488914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:44:21.586886  488914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1202 21:44:21.600852  488914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:44:21.604702  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.760401  488914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:44:22.412975  488914 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:44:22.412987  488914 certs.go:195] generating shared ca certs ...
	I1202 21:44:22.413002  488914 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:44:22.413155  488914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:44:22.413195  488914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:44:22.413201  488914 certs.go:257] generating profile certs ...
	I1202 21:44:22.413284  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:44:22.413360  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:44:22.413398  488914 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:44:22.413511  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:44:22.413543  488914 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:44:22.413552  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:44:22.413581  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:44:22.413604  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:44:22.413626  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:44:22.413674  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:22.414299  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:44:22.434951  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:44:22.453111  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:44:22.472098  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:44:22.493256  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:44:22.511523  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:44:22.529485  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:44:22.547667  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:44:22.565085  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:44:22.583650  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:44:22.601678  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:44:22.619263  488914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:44:22.631918  488914 ssh_runner.go:195] Run: openssl version
	I1202 21:44:22.638008  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:44:22.646246  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.649963  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.650030  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.691947  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:44:22.699744  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:44:22.707750  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711346  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711410  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.752553  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:44:22.760779  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:44:22.769102  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.772990  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.773054  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.817125  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:44:22.825521  488914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:44:22.829263  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:44:22.870268  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:44:22.912651  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:44:22.953793  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:44:22.994690  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:44:23.036128  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
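
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509, as a sketch — the path is one of the certs from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of -checkend 86400: will the cert still be valid in 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate is good for at least 24h")
        }
    }
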
	I1202 21:44:23.077233  488914 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:23.077311  488914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:44:23.077384  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.104728  488914 cri.go:89] found id: ""
	I1202 21:44:23.104787  488914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:44:23.112693  488914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:44:23.112702  488914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:44:23.112754  488914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:44:23.120199  488914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.120715  488914 kubeconfig.go:125] found "functional-066896" server: "https://192.168.49.2:8441"
	I1202 21:44:23.122004  488914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:44:23.129849  488914 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:29:46.719862797 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:44:21.596345133 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 21:44:23.129868  488914 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:44:23.129878  488914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 21:44:23.129934  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.164567  488914 cri.go:89] found id: ""
	I1202 21:44:23.164629  488914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:44:23.192730  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:44:23.201193  488914 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5576 Dec  2 21:33 /etc/kubernetes/scheduler.conf
	
	I1202 21:44:23.201254  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:44:23.209100  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:44:23.217145  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.217201  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:44:23.224901  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.232713  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.232773  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.240473  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:44:23.248046  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.248102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:44:23.255508  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:44:23.263587  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:23.311842  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.167347  488914 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.855478015s)
	I1202 21:44:25.167416  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.367575  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.433420  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.478422  488914 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:44:25.478494  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:25.978693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep poll repeats every ~500ms; 117 consecutive attempts (21:44:26 through 21:45:24) elided, none finding a kube-apiserver process ...]
	I1202 21:45:24.978579  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
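
That minute of polling is api_server.go:52 waiting for a kube-apiserver process to appear after the kubeadm init phases; here it never does, so the run falls through to the log gathering below. A generic sketch of that wait pattern — the pgrep pattern and 500ms cadence come from the log, the loop itself is illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }
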
	I1202 21:45:25.479540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:25.479652  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:25.504711  488914 cri.go:89] found id: ""
	I1202 21:45:25.504725  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.504732  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:25.504738  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:25.504795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:25.529752  488914 cri.go:89] found id: ""
	I1202 21:45:25.529766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.529773  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:25.529778  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:25.529838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:25.555068  488914 cri.go:89] found id: ""
	I1202 21:45:25.555082  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.555089  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:25.555095  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:25.555154  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:25.583996  488914 cri.go:89] found id: ""
	I1202 21:45:25.584010  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.584017  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:25.584023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:25.584083  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:25.613039  488914 cri.go:89] found id: ""
	I1202 21:45:25.613053  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.613060  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:25.613065  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:25.613125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:25.638912  488914 cri.go:89] found id: ""
	I1202 21:45:25.638926  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.638933  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:25.638938  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:25.639016  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:25.663753  488914 cri.go:89] found id: ""
	I1202 21:45:25.663766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.663773  488914 logs.go:284] No container was found matching "kindnet"
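The block above is minikube asking the CRI runtime for each control-plane container in turn and finding none. A hedged sketch of the same check as a single loop (the crictl invocation is copied from the log; the component list is the one minikube queries above):

    # List any container (running or exited) for each control-plane component.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] || echo "no container matching $name"
    done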
	I1202 21:45:25.663781  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:25.663793  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:25.693023  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:25.693040  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:25.759763  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:25.759782  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:25.774658  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:25.774679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:25.838644  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
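The "connection refused" above follows directly from the empty container listings: nothing is serving on localhost:8441, the apiserver address this profile's kubeconfig points at (per the URLs in the stderr). A quick manual probe, assuming you are inside the same node:

    # Probe the apiserver endpoint the kubeconfig targets; while
    # kube-apiserver is down this fails with "connection refused".
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable on :8441"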
	I1202 21:45:25.838656  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:25.838667  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.417551  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:28.428847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:28.428924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:28.461391  488914 cri.go:89] found id: ""
	I1202 21:45:28.461406  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.461413  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:28.461418  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:28.461487  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:28.493536  488914 cri.go:89] found id: ""
	I1202 21:45:28.493549  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.493556  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:28.493561  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:28.493625  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:28.521334  488914 cri.go:89] found id: ""
	I1202 21:45:28.521347  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.521354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:28.521360  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:28.521429  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:28.546459  488914 cri.go:89] found id: ""
	I1202 21:45:28.546472  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.546479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:28.546484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:28.546558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:28.573310  488914 cri.go:89] found id: ""
	I1202 21:45:28.573325  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.573332  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:28.573338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:28.573398  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:28.603231  488914 cri.go:89] found id: ""
	I1202 21:45:28.603245  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.603252  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:28.603259  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:28.603339  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:28.628995  488914 cri.go:89] found id: ""
	I1202 21:45:28.629009  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.629016  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:28.629024  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:28.629034  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:28.694293  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:28.694315  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:28.709309  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:28.709326  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:28.772742  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:28.772763  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:28.772775  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.851065  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:28.851099  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.383921  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:31.394465  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:31.394529  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:31.432030  488914 cri.go:89] found id: ""
	I1202 21:45:31.432046  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.432053  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:31.432061  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:31.432122  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:31.469314  488914 cri.go:89] found id: ""
	I1202 21:45:31.469327  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.469334  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:31.469339  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:31.469399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:31.495701  488914 cri.go:89] found id: ""
	I1202 21:45:31.495715  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.495721  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:31.495726  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:31.495783  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:31.525459  488914 cri.go:89] found id: ""
	I1202 21:45:31.525472  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.525479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:31.525484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:31.525548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:31.551543  488914 cri.go:89] found id: ""
	I1202 21:45:31.551557  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.551564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:31.551569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:31.551635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:31.576459  488914 cri.go:89] found id: ""
	I1202 21:45:31.576473  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.576479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:31.576485  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:31.576543  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:31.605711  488914 cri.go:89] found id: ""
	I1202 21:45:31.605726  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.605733  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:31.605741  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:31.605752  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.637077  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:31.637094  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:31.704571  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:31.704592  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:31.719615  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:31.719640  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:31.784987  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:31.776784   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.777502   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779172   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779783   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.781463   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:31.785007  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:31.785019  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.367127  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:34.377127  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:34.377203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:34.402736  488914 cri.go:89] found id: ""
	I1202 21:45:34.402750  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.402757  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:34.402769  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:34.402864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:34.443728  488914 cri.go:89] found id: ""
	I1202 21:45:34.443742  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.443749  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:34.443754  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:34.443815  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:34.479956  488914 cri.go:89] found id: ""
	I1202 21:45:34.479970  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.479985  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:34.479991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:34.480055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:34.508482  488914 cri.go:89] found id: ""
	I1202 21:45:34.508503  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.508510  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:34.508516  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:34.508573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:34.534801  488914 cri.go:89] found id: ""
	I1202 21:45:34.534814  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.534821  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:34.534826  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:34.534884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:34.559463  488914 cri.go:89] found id: ""
	I1202 21:45:34.559477  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.559484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:34.559490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:34.559551  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:34.584528  488914 cri.go:89] found id: ""
	I1202 21:45:34.584543  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.584550  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:34.584557  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:34.584568  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:34.651241  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:34.651261  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:34.666228  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:34.666244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:34.728086  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:34.720557   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.720952   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.722671   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.723025   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.724562   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:34.728108  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:34.728120  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.804348  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:34.804369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:37.332022  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:37.341829  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:37.341888  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:37.366064  488914 cri.go:89] found id: ""
	I1202 21:45:37.366078  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.366085  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:37.366090  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:37.366147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:37.395570  488914 cri.go:89] found id: ""
	I1202 21:45:37.395584  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.395590  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:37.395595  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:37.395663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:37.429125  488914 cri.go:89] found id: ""
	I1202 21:45:37.429140  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.429147  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:37.429161  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:37.429218  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:37.462030  488914 cri.go:89] found id: ""
	I1202 21:45:37.462054  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.462062  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:37.462080  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:37.462152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:37.490229  488914 cri.go:89] found id: ""
	I1202 21:45:37.490242  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.490260  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:37.490266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:37.490349  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:37.515496  488914 cri.go:89] found id: ""
	I1202 21:45:37.515510  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.515516  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:37.515522  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:37.515578  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:37.544546  488914 cri.go:89] found id: ""
	I1202 21:45:37.544560  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.544567  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:37.544575  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:37.544586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:37.617995  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:37.618023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:37.634282  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:37.634307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:37.704089  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:37.696265   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.697434   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.698656   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.699121   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.700652   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:37.704099  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:37.704110  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:37.780382  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:37.780402  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.308261  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:40.318898  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:40.318954  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:40.351388  488914 cri.go:89] found id: ""
	I1202 21:45:40.351403  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.351409  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:40.351415  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:40.351476  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:40.376844  488914 cri.go:89] found id: ""
	I1202 21:45:40.376857  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.376864  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:40.376869  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:40.376927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:40.400732  488914 cri.go:89] found id: ""
	I1202 21:45:40.400745  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.400752  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:40.400757  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:40.400816  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:40.446048  488914 cri.go:89] found id: ""
	I1202 21:45:40.446061  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.446067  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:40.446075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:40.446134  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:40.475997  488914 cri.go:89] found id: ""
	I1202 21:45:40.476011  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.476018  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:40.476023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:40.476081  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:40.501615  488914 cri.go:89] found id: ""
	I1202 21:45:40.501629  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.501636  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:40.501642  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:40.501705  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:40.526763  488914 cri.go:89] found id: ""
	I1202 21:45:40.526809  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.526816  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:40.526831  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:40.526842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:40.542072  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:40.542088  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:40.603416  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:40.594977   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.595712   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.597533   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.598122   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.599848   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:40.603427  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:40.603437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:40.683775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:40.683797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.710561  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:40.710577  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.275783  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:43.286075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:43.286135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:43.312011  488914 cri.go:89] found id: ""
	I1202 21:45:43.312026  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.312033  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:43.312039  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:43.312099  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:43.337316  488914 cri.go:89] found id: ""
	I1202 21:45:43.337330  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.337337  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:43.337359  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:43.337418  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:43.369627  488914 cri.go:89] found id: ""
	I1202 21:45:43.369641  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.369648  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:43.369653  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:43.369714  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:43.395672  488914 cri.go:89] found id: ""
	I1202 21:45:43.395686  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.395693  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:43.395698  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:43.395757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:43.436721  488914 cri.go:89] found id: ""
	I1202 21:45:43.436735  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.436742  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:43.436747  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:43.436808  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:43.468979  488914 cri.go:89] found id: ""
	I1202 21:45:43.468993  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.469008  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:43.469014  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:43.469084  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:43.500825  488914 cri.go:89] found id: ""
	I1202 21:45:43.500839  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.500846  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:43.500854  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:43.500864  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:43.537110  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:43.537127  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.604154  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:43.604172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:43.619529  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:43.619546  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:43.684232  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:43.676801   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.677191   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.678735   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.679232   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.680785   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:43.684242  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:43.684253  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.262533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:46.273030  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:46.273094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:46.298023  488914 cri.go:89] found id: ""
	I1202 21:45:46.298039  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.298045  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:46.298051  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:46.298109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:46.327737  488914 cri.go:89] found id: ""
	I1202 21:45:46.327752  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.327760  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:46.327769  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:46.327834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:46.353980  488914 cri.go:89] found id: ""
	I1202 21:45:46.353994  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.354003  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:46.354008  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:46.354073  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:46.380386  488914 cri.go:89] found id: ""
	I1202 21:45:46.380400  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.380406  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:46.380412  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:46.380480  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:46.406595  488914 cri.go:89] found id: ""
	I1202 21:45:46.406609  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.406616  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:46.406621  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:46.406679  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:46.441216  488914 cri.go:89] found id: ""
	I1202 21:45:46.441230  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.441237  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:46.441242  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:46.441305  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:46.473258  488914 cri.go:89] found id: ""
	I1202 21:45:46.473272  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.473279  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:46.473287  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:46.473298  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:46.490441  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:46.490458  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:46.554481  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:46.546212   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.546743   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548456   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548932   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.550452   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; byte-for-byte identical to the stderr shown directly above]
	I1202 21:45:46.554490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:46.554501  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.631777  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:46.631800  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:46.660339  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:46.660355  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:49.231885  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:49.243758  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:49.243823  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:49.268714  488914 cri.go:89] found id: ""
	I1202 21:45:49.268728  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.268735  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:49.268741  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:49.268799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:49.293827  488914 cri.go:89] found id: ""
	I1202 21:45:49.293842  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.293849  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:49.293854  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:49.293919  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:49.319633  488914 cri.go:89] found id: ""
	I1202 21:45:49.319647  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.319654  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:49.319661  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:49.319720  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:49.350167  488914 cri.go:89] found id: ""
	I1202 21:45:49.350181  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.350188  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:49.350193  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:49.350252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:49.375814  488914 cri.go:89] found id: ""
	I1202 21:45:49.375828  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.375835  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:49.375841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:49.375905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:49.400638  488914 cri.go:89] found id: ""
	I1202 21:45:49.400657  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.400664  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:49.400670  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:49.400727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:49.453654  488914 cri.go:89] found id: ""
	I1202 21:45:49.453668  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.453680  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:49.453689  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:49.453699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:49.479146  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:49.479161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:49.548448  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:49.548457  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:49.548468  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:49.628739  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:49.628759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:49.658161  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:49.658177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
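The seven per-component crictl calls repeated in every cycle can be reproduced with a single loop; a sketch under the assumption that the component names appearing in the log (kube-apiserver through kindnet) are the complete set the loop checks:

    # List control-plane containers one component at a time,
    # mirroring the crictl invocations in the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
    done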
	I1202 21:45:52.223612  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:52.234793  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:52.234899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:52.265577  488914 cri.go:89] found id: ""
	I1202 21:45:52.265591  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.265598  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:52.265603  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:52.265663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:52.292373  488914 cri.go:89] found id: ""
	I1202 21:45:52.292387  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.292394  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:52.292399  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:52.292466  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:52.317157  488914 cri.go:89] found id: ""
	I1202 21:45:52.317171  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.317178  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:52.317183  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:52.317240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:52.347843  488914 cri.go:89] found id: ""
	I1202 21:45:52.347856  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.347863  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:52.347868  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:52.347927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:52.372874  488914 cri.go:89] found id: ""
	I1202 21:45:52.372889  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.372895  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:52.372900  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:52.372962  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:52.398247  488914 cri.go:89] found id: ""
	I1202 21:45:52.398260  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.398267  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:52.398273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:52.398330  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:52.445693  488914 cri.go:89] found id: ""
	I1202 21:45:52.445706  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.445713  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:52.445721  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:52.445732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:52.465150  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:52.465167  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:52.540766  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:52.540776  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:52.540797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:52.618862  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:52.618882  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:52.648548  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:52.648565  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.221074  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:55.231158  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:55.231215  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:55.256269  488914 cri.go:89] found id: ""
	I1202 21:45:55.256282  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.256289  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:55.256294  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:55.256371  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:55.281345  488914 cri.go:89] found id: ""
	I1202 21:45:55.281360  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.281367  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:55.281372  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:55.281430  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:55.306779  488914 cri.go:89] found id: ""
	I1202 21:45:55.306793  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.306799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:55.306805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:55.306865  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:55.333304  488914 cri.go:89] found id: ""
	I1202 21:45:55.333318  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.333325  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:55.333333  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:55.333393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:55.358550  488914 cri.go:89] found id: ""
	I1202 21:45:55.358563  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.358570  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:55.358575  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:55.358638  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:55.387929  488914 cri.go:89] found id: ""
	I1202 21:45:55.387943  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.387951  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:55.387957  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:55.388020  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:55.426649  488914 cri.go:89] found id: ""
	I1202 21:45:55.426663  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.426670  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:55.426678  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:55.426687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:55.519746  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:55.519772  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:55.554225  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:55.554241  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.622464  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:55.622484  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:55.638187  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:55.638213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:55.703154  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
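Every describe-nodes attempt fails identically: nothing answers on localhost:8441, the apiserver port named in the kubeconfig (8441 rather than the usual 8443, which suggests a test-specific port; that reading is an inference, not stated in the log). Two quick manual checks that separate "no listener" from "listener but unhealthy", offered as plausible next steps rather than commands the harness runs:

    # Is anything bound to the apiserver port at all?
    sudo ss -ltnp | grep 8441 || echo "port 8441: no listener"
    # If a listener exists, query the health endpoint directly
    # (self-signed certificate, so skip verification):
    curl -sk https://localhost:8441/healthz || echo "healthz unreachable"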
	I1202 21:45:58.203385  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:58.213686  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:58.213750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:58.239330  488914 cri.go:89] found id: ""
	I1202 21:45:58.239344  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.239351  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:58.239356  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:58.239416  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:58.264371  488914 cri.go:89] found id: ""
	I1202 21:45:58.264385  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.264392  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:58.264397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:58.264454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:58.289420  488914 cri.go:89] found id: ""
	I1202 21:45:58.289434  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.289441  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:58.289446  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:58.289504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:58.317750  488914 cri.go:89] found id: ""
	I1202 21:45:58.317764  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.317772  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:58.317777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:58.317834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:58.341672  488914 cri.go:89] found id: ""
	I1202 21:45:58.341687  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.341694  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:58.341699  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:58.341764  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:58.366074  488914 cri.go:89] found id: ""
	I1202 21:45:58.366088  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.366094  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:58.366099  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:58.366160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:58.390704  488914 cri.go:89] found id: ""
	I1202 21:45:58.390718  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.390724  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:58.390741  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:58.390751  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:58.474575  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:58.474586  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:58.474598  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:58.558574  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:58.558604  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:58.589663  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:58.589680  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:58.656150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:58.656169  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:01.173977  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:01.186201  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:01.186270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:01.213408  488914 cri.go:89] found id: ""
	I1202 21:46:01.213424  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.213430  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:01.213436  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:01.213502  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:01.239993  488914 cri.go:89] found id: ""
	I1202 21:46:01.240007  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.240014  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:01.240019  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:01.240079  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:01.266106  488914 cri.go:89] found id: ""
	I1202 21:46:01.266120  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.266127  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:01.266132  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:01.266194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:01.292600  488914 cri.go:89] found id: ""
	I1202 21:46:01.292614  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.292621  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:01.292627  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:01.292689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:01.318438  488914 cri.go:89] found id: ""
	I1202 21:46:01.318453  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.318460  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:01.318466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:01.318530  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:01.344830  488914 cri.go:89] found id: ""
	I1202 21:46:01.344843  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.344850  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:01.344856  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:01.344914  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:01.370509  488914 cri.go:89] found id: ""
	I1202 21:46:01.370523  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.370534  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:01.370541  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:01.370551  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:01.400108  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:01.400123  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:01.484583  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:01.484603  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:01.501311  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:01.501329  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:01.571182  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:01.571193  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:01.571204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
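Note that the harness invokes the version-pinned kubectl that minikube stages inside the node, not a host kubectl; the same invocation can be replayed manually once the apiserver responds (binary and kubeconfig paths exactly as in the log):

    # Replay the describe-nodes call with the node-staged kubectl.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig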
	I1202 21:46:04.148935  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:04.159286  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:04.159346  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:04.191266  488914 cri.go:89] found id: ""
	I1202 21:46:04.191279  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.191286  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:04.191291  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:04.191350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:04.217195  488914 cri.go:89] found id: ""
	I1202 21:46:04.217209  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.217216  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:04.217221  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:04.217285  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:04.243674  488914 cri.go:89] found id: ""
	I1202 21:46:04.243689  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.243696  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:04.243701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:04.243760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:04.269892  488914 cri.go:89] found id: ""
	I1202 21:46:04.269905  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.269921  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:04.269927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:04.269998  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:04.296688  488914 cri.go:89] found id: ""
	I1202 21:46:04.296703  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.296711  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:04.296717  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:04.296785  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:04.322967  488914 cri.go:89] found id: ""
	I1202 21:46:04.322981  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.323017  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:04.323023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:04.323091  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:04.348936  488914 cri.go:89] found id: ""
	I1202 21:46:04.348956  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.348963  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:04.348972  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:04.348981  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:04.415190  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:04.415209  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:04.431456  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:04.431472  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:04.504661  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:04.504671  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:04.504682  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.581468  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:04.581487  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:07.110404  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:07.120667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:07.120727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:07.145924  488914 cri.go:89] found id: ""
	I1202 21:46:07.145938  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.145945  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:07.145950  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:07.146010  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:07.171187  488914 cri.go:89] found id: ""
	I1202 21:46:07.171200  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.171207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:07.171212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:07.171270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:07.197187  488914 cri.go:89] found id: ""
	I1202 21:46:07.197201  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.197208  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:07.197213  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:07.197272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:07.222713  488914 cri.go:89] found id: ""
	I1202 21:46:07.222728  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.222735  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:07.222740  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:07.222800  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:07.249213  488914 cri.go:89] found id: ""
	I1202 21:46:07.249226  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.249233  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:07.249239  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:07.249301  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:07.275464  488914 cri.go:89] found id: ""
	I1202 21:46:07.275478  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.275484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:07.275490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:07.275546  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:07.305137  488914 cri.go:89] found id: ""
	I1202 21:46:07.305151  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.305166  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:07.305174  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:07.305187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:07.370440  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:07.370459  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:07.386336  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:07.386354  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:07.458373  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:07.458383  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:07.458395  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:07.542802  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:07.542822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:10.076833  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:10.087724  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:10.087819  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:10.114700  488914 cri.go:89] found id: ""
	I1202 21:46:10.114714  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.114722  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:10.114728  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:10.114794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:10.140632  488914 cri.go:89] found id: ""
	I1202 21:46:10.140646  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.140652  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:10.140658  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:10.140715  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:10.169820  488914 cri.go:89] found id: ""
	I1202 21:46:10.169834  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.169841  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:10.169850  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:10.169911  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:10.195172  488914 cri.go:89] found id: ""
	I1202 21:46:10.195186  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.195193  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:10.195199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:10.195262  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:10.229303  488914 cri.go:89] found id: ""
	I1202 21:46:10.229317  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.229324  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:10.229330  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:10.229392  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:10.257081  488914 cri.go:89] found id: ""
	I1202 21:46:10.257096  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.257102  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:10.257108  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:10.257168  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:10.283246  488914 cri.go:89] found id: ""
	I1202 21:46:10.283259  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.283267  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:10.283274  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:10.283284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:10.351168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:10.351187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:10.366368  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:10.366385  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:10.438623  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:10.438633  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:10.438646  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:10.516775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:10.516796  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:13.045661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:13.056197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:13.056259  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:13.087662  488914 cri.go:89] found id: ""
	I1202 21:46:13.087675  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.087682  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:13.087688  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:13.087748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:13.113347  488914 cri.go:89] found id: ""
	I1202 21:46:13.113361  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.113368  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:13.113373  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:13.113432  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:13.139083  488914 cri.go:89] found id: ""
	I1202 21:46:13.139098  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.139105  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:13.139110  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:13.139181  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:13.165107  488914 cri.go:89] found id: ""
	I1202 21:46:13.165121  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.165128  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:13.165133  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:13.165196  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:13.190075  488914 cri.go:89] found id: ""
	I1202 21:46:13.190090  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.190107  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:13.190113  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:13.190180  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:13.219255  488914 cri.go:89] found id: ""
	I1202 21:46:13.219269  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.219276  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:13.219281  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:13.219342  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:13.245328  488914 cri.go:89] found id: ""
	I1202 21:46:13.245342  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.245350  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:13.245358  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:13.245369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:13.310150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:13.310168  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:13.325530  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:13.325550  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:13.389916  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:13.382188   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.382836   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384508   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384993   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.386473   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:13.389926  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:13.389938  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:13.474064  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:13.474083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
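The block above is one iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks the CRI runtime via crictl for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal sketch for rerunning the same container checks by hand, assuming shell access to the node (minikube ssh opens one for the default profile; the profile name is not shown in this log):

    minikube ssh    # assumes the default profile; add -p <profile> otherwise
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"    # same query the loop runs; prints only container IDs
    done

An empty result for every name, as throughout this log, means CRI-O never created the control-plane containers at all, rather than creating them and having them crash.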
	I1202 21:46:16.007285  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:16.018077  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:16.018147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:16.048444  488914 cri.go:89] found id: ""
	I1202 21:46:16.048458  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.048465  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:16.048477  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:16.048539  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:16.075066  488914 cri.go:89] found id: ""
	I1202 21:46:16.075079  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.075085  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:16.075090  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:16.075152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:16.100648  488914 cri.go:89] found id: ""
	I1202 21:46:16.100662  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.100669  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:16.100674  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:16.100732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:16.131449  488914 cri.go:89] found id: ""
	I1202 21:46:16.131463  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.131470  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:16.131475  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:16.131534  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:16.158249  488914 cri.go:89] found id: ""
	I1202 21:46:16.158263  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.158270  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:16.158276  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:16.158340  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:16.183613  488914 cri.go:89] found id: ""
	I1202 21:46:16.183627  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.183633  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:16.183641  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:16.183702  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:16.209461  488914 cri.go:89] found id: ""
	I1202 21:46:16.209475  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.209483  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:16.209490  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:16.209500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:16.275500  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:16.275520  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:16.291181  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:16.291196  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:16.361346  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:16.353221   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.354005   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355626   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355946   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.357477   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:16.361356  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:16.361368  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:16.437676  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:16.437697  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
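Each describe-nodes attempt fails the same way: kubectl cannot reach https://localhost:8441 and gets connection refused on [::1]:8441, meaning nothing is listening on the apiserver port at all. A quick way to confirm that from inside the node; both commands are standard Linux tooling, not taken from this log:

    sudo ss -tlnp | grep 8441                                  # no output: nothing bound to the port
    curl -sk https://localhost:8441/healthz; echo "exit=$?"    # exit=7: curl's connection-refused code

ss flags, for reference: -t TCP sockets, -l listening only, -n numeric addresses, -p owning process.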
	I1202 21:46:18.967950  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:18.977983  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:18.978057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:19.007682  488914 cri.go:89] found id: ""
	I1202 21:46:19.007706  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.007714  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:19.007720  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:19.007794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:19.033939  488914 cri.go:89] found id: ""
	I1202 21:46:19.033961  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.033969  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:19.033975  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:19.034042  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:19.059516  488914 cri.go:89] found id: ""
	I1202 21:46:19.059531  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.059544  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:19.059550  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:19.059616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:19.086051  488914 cri.go:89] found id: ""
	I1202 21:46:19.086065  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.086072  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:19.086078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:19.086135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:19.110886  488914 cri.go:89] found id: ""
	I1202 21:46:19.110899  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.110906  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:19.110911  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:19.110969  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:19.137589  488914 cri.go:89] found id: ""
	I1202 21:46:19.137603  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.137610  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:19.137615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:19.137673  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:19.162755  488914 cri.go:89] found id: ""
	I1202 21:46:19.162769  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.162776  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:19.162784  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:19.162794  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:19.189873  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:19.189888  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:19.255357  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:19.255375  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:19.270844  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:19.270861  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:19.340061  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:19.331455   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.332143   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.333672   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.334108   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.335622   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:19.340072  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:19.340089  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
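The poll at the top of each iteration, sudo pgrep -xnf kube-apiserver.*minikube.*, looks for the newest process whose full command line matches that pattern; a non-zero exit (no match) is what keeps the loop retrying. The same check, reproduced:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # -f: match full cmdline, -x: whole-line match, -n: newest PID only
    echo "exit=$?"                                  # 1 when no process matches, as throughout this log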
	I1202 21:46:21.925504  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:21.935839  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:21.935899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:21.960350  488914 cri.go:89] found id: ""
	I1202 21:46:21.960363  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.960370  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:21.960375  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:21.960434  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:21.986080  488914 cri.go:89] found id: ""
	I1202 21:46:21.986097  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.986105  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:21.986112  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:21.986174  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:22.014687  488914 cri.go:89] found id: ""
	I1202 21:46:22.014702  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.014709  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:22.014715  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:22.014778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:22.042230  488914 cri.go:89] found id: ""
	I1202 21:46:22.042245  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.042252  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:22.042257  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:22.042320  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:22.072112  488914 cri.go:89] found id: ""
	I1202 21:46:22.072126  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.072134  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:22.072139  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:22.072210  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:22.098531  488914 cri.go:89] found id: ""
	I1202 21:46:22.098555  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.098562  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:22.098568  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:22.098649  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:22.124074  488914 cri.go:89] found id: ""
	I1202 21:46:22.124088  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.124095  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:22.124102  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:22.124112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:22.190291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:22.190311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:22.205264  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:22.205283  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:22.273286  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:22.264766   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.265364   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.266885   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.267553   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.269194   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:22.273308  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:22.273321  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:22.349070  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:22.349090  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
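The gathering steps themselves are plain journalctl and dmesg calls, so they can be rerun directly when triaging a node in this state; these are the exact commands from the log:

    sudo journalctl -u kubelet -n 400    # last 400 kubelet lines
    sudo journalctl -u crio -n 400       # last 400 CRI-O lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

dmesg flags, for reference: -P disables the pager, -H enables human-readable output, -L=never disables color, and --level restricts output to the listed severities.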
	I1202 21:46:24.882662  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:24.893199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:24.893260  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:24.918892  488914 cri.go:89] found id: ""
	I1202 21:46:24.918906  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.918913  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:24.918918  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:24.918977  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:24.944030  488914 cri.go:89] found id: ""
	I1202 21:46:24.944043  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.944050  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:24.944055  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:24.944115  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:24.969743  488914 cri.go:89] found id: ""
	I1202 21:46:24.969758  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.969765  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:24.969770  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:24.969827  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:25.003432  488914 cri.go:89] found id: ""
	I1202 21:46:25.003449  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.003459  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:25.003466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:25.003573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:25.030965  488914 cri.go:89] found id: ""
	I1202 21:46:25.030979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.030985  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:25.030991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:25.031072  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:25.057965  488914 cri.go:89] found id: ""
	I1202 21:46:25.057979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.057986  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:25.057991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:25.058048  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:25.085099  488914 cri.go:89] found id: ""
	I1202 21:46:25.085113  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.085129  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:25.085137  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:25.085147  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:25.115538  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:25.115553  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:25.181412  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:25.181432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:25.196691  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:25.196712  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:25.261474  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:25.253377   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.253981   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.255584   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.256147   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.257741   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:25.261490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:25.261500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:27.838685  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:27.849142  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:27.849203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:27.874519  488914 cri.go:89] found id: ""
	I1202 21:46:27.874533  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.874539  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:27.874545  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:27.874603  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:27.900185  488914 cri.go:89] found id: ""
	I1202 21:46:27.900198  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.900207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:27.900212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:27.900270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:27.926179  488914 cri.go:89] found id: ""
	I1202 21:46:27.926202  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.926209  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:27.926215  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:27.926280  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:27.951950  488914 cri.go:89] found id: ""
	I1202 21:46:27.951964  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.951971  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:27.951977  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:27.952034  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:27.976779  488914 cri.go:89] found id: ""
	I1202 21:46:27.976793  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.976799  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:27.976804  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:27.976864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:28.013447  488914 cri.go:89] found id: ""
	I1202 21:46:28.013462  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.013479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:28.013495  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:28.013562  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:28.041485  488914 cri.go:89] found id: ""
	I1202 21:46:28.041508  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.041516  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:28.041524  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:28.041536  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:28.057180  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:28.057197  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:28.121537  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:28.113244   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.113943   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.115648   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.116208   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.117879   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:28.121548  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:28.121559  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:28.197190  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:28.197210  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:28.229525  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:28.229541  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:30.795826  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:30.806266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:30.806329  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:30.834208  488914 cri.go:89] found id: ""
	I1202 21:46:30.834222  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.834229  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:30.834234  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:30.834293  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:30.859664  488914 cri.go:89] found id: ""
	I1202 21:46:30.859678  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.859685  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:30.859690  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:30.859748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:30.889034  488914 cri.go:89] found id: ""
	I1202 21:46:30.889048  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.889055  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:30.889061  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:30.889117  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:30.914676  488914 cri.go:89] found id: ""
	I1202 21:46:30.914689  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.914696  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:30.914701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:30.914759  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:30.939761  488914 cri.go:89] found id: ""
	I1202 21:46:30.939774  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.939782  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:30.939787  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:30.939843  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:30.965463  488914 cri.go:89] found id: ""
	I1202 21:46:30.965476  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.965483  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:30.965488  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:30.965545  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:30.990187  488914 cri.go:89] found id: ""
	I1202 21:46:30.990200  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.990206  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:30.990224  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:30.990236  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:31.005797  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:31.005813  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:31.069684  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:31.062028   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.062610   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064158   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064666   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.066156   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:31.069694  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:31.069707  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:31.145787  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:31.145809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:31.178743  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:31.178759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
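When every crictl query comes back empty like this, the next thing worth checking is whether kubelet ever saw the static pod manifests it is supposed to launch the control plane from. A hedged sketch, assuming the standard kubeadm layout minikube uses (the paths are conventional, not taken from this log):

    sudo ls -l /etc/kubernetes/manifests    # kube-apiserver.yaml, etcd.yaml, etc. should be here
    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static pod|manifest'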
	I1202 21:46:33.744496  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:33.754580  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:33.754651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:33.779528  488914 cri.go:89] found id: ""
	I1202 21:46:33.779541  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.779548  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:33.779554  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:33.779616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:33.804198  488914 cri.go:89] found id: ""
	I1202 21:46:33.804212  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.804219  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:33.804227  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:33.804289  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:33.829645  488914 cri.go:89] found id: ""
	I1202 21:46:33.829659  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.829666  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:33.829675  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:33.829734  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:33.858338  488914 cri.go:89] found id: ""
	I1202 21:46:33.858352  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.858368  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:33.858375  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:33.858433  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:33.884555  488914 cri.go:89] found id: ""
	I1202 21:46:33.884570  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.884578  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:33.884583  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:33.884651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:33.912967  488914 cri.go:89] found id: ""
	I1202 21:46:33.912981  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.912988  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:33.912994  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:33.913055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:33.938088  488914 cri.go:89] found id: ""
	I1202 21:46:33.938102  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.938110  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:33.938118  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:33.938133  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:34.003604  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:34.003631  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:34.022128  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:34.022146  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:34.092004  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:34.083929   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.084375   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086257   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086725   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.088064   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:34.092015  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:34.092029  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:34.169499  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:34.169519  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
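The container-status command uses a small shell fallback chain: it resolves crictl via which, falls back to the bare name if which finds nothing, and tries docker ps -a only if the crictl listing fails outright. The same pattern, spelled out in long form:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # equivalent long form:
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a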
	I1202 21:46:36.700051  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:36.711435  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:36.711497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:36.738690  488914 cri.go:89] found id: ""
	I1202 21:46:36.738704  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.738711  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:36.738717  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:36.738776  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:36.765789  488914 cri.go:89] found id: ""
	I1202 21:46:36.765802  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.765810  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:36.765815  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:36.765880  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:36.790056  488914 cri.go:89] found id: ""
	I1202 21:46:36.790070  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.790077  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:36.790082  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:36.790138  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:36.818201  488914 cri.go:89] found id: ""
	I1202 21:46:36.818214  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.818221  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:36.818227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:36.818288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:36.845623  488914 cri.go:89] found id: ""
	I1202 21:46:36.845637  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.845644  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:36.845650  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:36.845710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:36.871336  488914 cri.go:89] found id: ""
	I1202 21:46:36.871350  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.871357  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:36.871362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:36.871427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:36.897589  488914 cri.go:89] found id: ""
	I1202 21:46:36.897605  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.897611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:36.897619  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:36.897630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:36.913198  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:36.913213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:36.973711  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:36.965706   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.966427   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.967404   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.968855   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.969298   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:36.973721  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:36.973732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:37.054868  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:37.054889  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:37.083961  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:37.083976  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
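The cycle above is minikube's control-plane diagnostic pass: it probes each expected component container by name through crictl, finds none, and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same probe can be run by hand; a minimal sketch, assuming shell access to the node (for example via minikube ssh) and reusing exactly the commands the log records, with only the loop wrapper added:

    # probe each expected control-plane container, as the cri.go lines above do
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        echo "== $name =="
        sudo crictl ps -a --quiet --name="$name"   # empty output means not found
    done
    # gather the same logs the report collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400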
	I1202 21:46:39.651305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:39.662125  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:39.662189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:39.693251  488914 cri.go:89] found id: ""
	I1202 21:46:39.693264  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.693271  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:39.693277  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:39.693333  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:39.720953  488914 cri.go:89] found id: ""
	I1202 21:46:39.720969  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.720976  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:39.720981  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:39.721039  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:39.747423  488914 cri.go:89] found id: ""
	I1202 21:46:39.747436  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.747443  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:39.747448  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:39.747512  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:39.773314  488914 cri.go:89] found id: ""
	I1202 21:46:39.773328  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.773335  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:39.773340  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:39.773396  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:39.801946  488914 cri.go:89] found id: ""
	I1202 21:46:39.801960  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.801966  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:39.801971  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:39.802027  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:39.831169  488914 cri.go:89] found id: ""
	I1202 21:46:39.831182  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.831189  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:39.831195  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:39.831255  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:39.855958  488914 cri.go:89] found id: ""
	I1202 21:46:39.855972  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.855979  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:39.855987  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:39.855997  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.921041  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:39.921076  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:39.936417  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:39.936433  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:40.005449  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:39.993742   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.994635   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996381   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996674   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.998192   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:40.005465  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:40.005479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:40.099731  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:40.099754  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.632158  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:42.642592  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:42.642655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:42.680753  488914 cri.go:89] found id: ""
	I1202 21:46:42.680767  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.680774  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:42.680780  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:42.680845  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:42.727033  488914 cri.go:89] found id: ""
	I1202 21:46:42.727047  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.727056  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:42.727062  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:42.727125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:42.753808  488914 cri.go:89] found id: ""
	I1202 21:46:42.753822  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.753829  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:42.753848  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:42.753906  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:42.782178  488914 cri.go:89] found id: ""
	I1202 21:46:42.782192  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.782200  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:42.782206  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:42.782272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:42.807839  488914 cri.go:89] found id: ""
	I1202 21:46:42.807853  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.807860  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:42.807867  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:42.807927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:42.834250  488914 cri.go:89] found id: ""
	I1202 21:46:42.834276  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.834283  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:42.834290  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:42.834355  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:42.861699  488914 cri.go:89] found id: ""
	I1202 21:46:42.861721  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.861728  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:42.861736  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:42.861747  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:42.937587  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:42.937608  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.969352  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:42.969374  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:43.035113  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:43.035138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:43.050909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:43.050924  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:43.116601  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:43.107713   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.108431   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.110316   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.111086   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.112866   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.616905  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:45.627026  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:45.627089  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:45.653296  488914 cri.go:89] found id: ""
	I1202 21:46:45.653311  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.653318  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:45.653323  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:45.653389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:45.685320  488914 cri.go:89] found id: ""
	I1202 21:46:45.685334  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.685342  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:45.685347  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:45.685407  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:45.714439  488914 cri.go:89] found id: ""
	I1202 21:46:45.714453  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.714460  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:45.714466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:45.714524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:45.741650  488914 cri.go:89] found id: ""
	I1202 21:46:45.741665  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.741672  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:45.741678  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:45.741748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:45.768339  488914 cri.go:89] found id: ""
	I1202 21:46:45.768374  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.768381  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:45.768387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:45.768446  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:45.793382  488914 cri.go:89] found id: ""
	I1202 21:46:45.793396  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.793404  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:45.793410  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:45.793470  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:45.821520  488914 cri.go:89] found id: ""
	I1202 21:46:45.821534  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.821541  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:45.821549  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:45.821560  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:45.836636  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:45.836657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:45.903141  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:45.894421   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.895256   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897082   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897803   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.899654   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.903152  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:45.903182  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:45.983151  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:45.983172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:46.016509  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:46.016525  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:48.589533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:48.600004  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:48.600063  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:48.624724  488914 cri.go:89] found id: ""
	I1202 21:46:48.624738  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.624745  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:48.624751  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:48.624809  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:48.649307  488914 cri.go:89] found id: ""
	I1202 21:46:48.649322  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.649329  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:48.649335  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:48.649393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:48.689464  488914 cri.go:89] found id: ""
	I1202 21:46:48.689477  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.689484  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:48.689489  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:48.689548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:48.718180  488914 cri.go:89] found id: ""
	I1202 21:46:48.718195  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.718202  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:48.718207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:48.718274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:48.748759  488914 cri.go:89] found id: ""
	I1202 21:46:48.748773  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.748781  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:48.748786  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:48.748847  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:48.773610  488914 cri.go:89] found id: ""
	I1202 21:46:48.773624  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.773631  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:48.773637  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:48.773694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:48.798539  488914 cri.go:89] found id: ""
	I1202 21:46:48.798553  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.798560  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:48.798568  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:48.798580  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:48.813434  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:48.813450  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:48.873005  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:48.865979   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.866496   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.867575   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.868055   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.869544   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:48.873016  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:48.873027  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:48.949124  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:48.949143  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:48.981243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:48.981259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
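Every describe-nodes attempt fails identically: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and the connection is refused because no kube-apiserver container or process exists (every crictl probe above returned an empty id list). A quick independent check of the refused port, assuming shell access on the node; the /healthz path is the standard apiserver health endpoint and is an assumption here, not something shown in this log:

    # expect "Connection refused" while the apiserver is down
    curl -k https://localhost:8441/healthz
    # confirm nothing is listening on 8441
    sudo ss -ltnp | grep ':8441' || echo 'no listener on 8441'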
	I1202 21:46:51.549061  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:51.558950  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:51.559026  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:51.583587  488914 cri.go:89] found id: ""
	I1202 21:46:51.583601  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.583608  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:51.583614  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:51.583674  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:51.609150  488914 cri.go:89] found id: ""
	I1202 21:46:51.609163  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.609170  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:51.609175  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:51.609237  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:51.634897  488914 cri.go:89] found id: ""
	I1202 21:46:51.634910  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.634917  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:51.634922  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:51.634980  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:51.665746  488914 cri.go:89] found id: ""
	I1202 21:46:51.665760  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.665766  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:51.665772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:51.665830  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:51.704219  488914 cri.go:89] found id: ""
	I1202 21:46:51.704233  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.704240  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:51.704246  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:51.704310  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:51.736171  488914 cri.go:89] found id: ""
	I1202 21:46:51.736194  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.736202  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:51.736207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:51.736274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:51.765446  488914 cri.go:89] found id: ""
	I1202 21:46:51.765469  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.765476  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:51.765484  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:51.765494  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:51.792551  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:51.792566  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.857688  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:51.857706  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:51.873199  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:51.873214  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:51.942299  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:51.934624   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.935273   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.936792   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.937322   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.938323   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:51.942311  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:51.942323  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.519031  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:54.529427  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:54.529497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:54.558708  488914 cri.go:89] found id: ""
	I1202 21:46:54.558722  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.558729  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:54.558735  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:54.558796  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:54.583135  488914 cri.go:89] found id: ""
	I1202 21:46:54.583148  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.583155  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:54.583160  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:54.583221  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:54.609361  488914 cri.go:89] found id: ""
	I1202 21:46:54.609382  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.609390  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:54.609396  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:54.609461  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:54.637663  488914 cri.go:89] found id: ""
	I1202 21:46:54.637677  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.637683  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:54.637691  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:54.637748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:54.666901  488914 cri.go:89] found id: ""
	I1202 21:46:54.666915  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.666922  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:54.666927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:54.666987  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:54.695329  488914 cri.go:89] found id: ""
	I1202 21:46:54.695343  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.695350  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:54.695355  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:54.695413  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:54.724947  488914 cri.go:89] found id: ""
	I1202 21:46:54.724961  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.724967  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:54.724975  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:54.724986  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:54.742963  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:54.742980  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:54.810513  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:54.803073   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.803954   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805454   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805860   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.806992   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:54.810523  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:54.810534  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.883552  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:54.883571  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:54.911389  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:54.911406  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.481762  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:57.492870  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:57.492930  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:57.517199  488914 cri.go:89] found id: ""
	I1202 21:46:57.517213  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.517220  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:57.517225  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:57.517292  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:57.543039  488914 cri.go:89] found id: ""
	I1202 21:46:57.543053  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.543060  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:57.543066  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:57.543130  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:57.567509  488914 cri.go:89] found id: ""
	I1202 21:46:57.567524  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.567530  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:57.567536  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:57.567597  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:57.593052  488914 cri.go:89] found id: ""
	I1202 21:46:57.593074  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.593081  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:57.593087  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:57.593151  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:57.618537  488914 cri.go:89] found id: ""
	I1202 21:46:57.618551  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.618558  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:57.618563  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:57.618626  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:57.645917  488914 cri.go:89] found id: ""
	I1202 21:46:57.645931  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.645938  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:57.645943  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:57.646003  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:57.673325  488914 cri.go:89] found id: ""
	I1202 21:46:57.673338  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.673353  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:57.673362  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:57.673378  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:57.748284  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:57.740291   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.740917   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.742583   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.743218   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.744902   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:57.748294  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:57.748305  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:57.828296  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:57.828314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:57.855830  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:57.855846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.921121  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:57.921140  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:00.436836  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:00.448366  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:00.448436  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:00.478939  488914 cri.go:89] found id: ""
	I1202 21:47:00.478953  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.478960  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:00.478969  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:00.479059  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:00.505959  488914 cri.go:89] found id: ""
	I1202 21:47:00.505974  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.505981  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:00.505986  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:00.506050  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:00.532568  488914 cri.go:89] found id: ""
	I1202 21:47:00.532584  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.532597  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:00.532602  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:00.532667  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:00.561666  488914 cri.go:89] found id: ""
	I1202 21:47:00.561680  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.561687  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:00.561692  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:00.561753  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:00.588051  488914 cri.go:89] found id: ""
	I1202 21:47:00.588065  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.588072  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:00.588078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:00.588139  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:00.612422  488914 cri.go:89] found id: ""
	I1202 21:47:00.612437  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.612443  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:00.612449  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:00.612513  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:00.642069  488914 cri.go:89] found id: ""
	I1202 21:47:00.642082  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.642089  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:00.642097  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:00.642108  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:00.727511  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:00.716696   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.717383   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.721543   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.722286   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.724054   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:00.727520  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:00.727531  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:00.803650  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:00.803671  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:00.832608  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:00.832624  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:00.900692  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:00.900713  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
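The timestamps show the whole sequence repeating on a roughly three-second cadence: each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* and, finding no matching process, falls through to the same log gathering. An equivalent wait loop as a sketch; the 3-second interval matches the timestamps above, while the overall timeout is illustrative, since minikube's actual retry budget is not visible in this excerpt:

    # poll for a running apiserver, as the repeated pgrep lines above do
    deadline=$((SECONDS + 300))   # illustrative 5-minute budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo 'apiserver never came up' >&2
            exit 1
        fi
        sleep 3
    done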
	I1202 21:47:03.417333  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:03.427135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:03.427205  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:03.451551  488914 cri.go:89] found id: ""
	I1202 21:47:03.451566  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.451573  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:03.451578  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:03.451635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:03.476736  488914 cri.go:89] found id: ""
	I1202 21:47:03.476750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.476757  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:03.476763  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:03.476825  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:03.501736  488914 cri.go:89] found id: ""
	I1202 21:47:03.501750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.501756  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:03.501761  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:03.501820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:03.527339  488914 cri.go:89] found id: ""
	I1202 21:47:03.527353  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.527360  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:03.527365  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:03.527427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:03.552910  488914 cri.go:89] found id: ""
	I1202 21:47:03.552923  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.552930  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:03.552936  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:03.552994  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:03.578110  488914 cri.go:89] found id: ""
	I1202 21:47:03.578124  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.578130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:03.578135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:03.578194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:03.603194  488914 cri.go:89] found id: ""
	I1202 21:47:03.603208  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.603215  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:03.603223  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:03.603233  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:03.688154  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:03.688174  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:03.725392  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:03.725408  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:03.791852  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:03.791873  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.807065  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:03.807080  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:03.882666  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
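Every kubectl call here fails with dial tcp [::1]:8441: connect: connection refused, meaning nothing is accepting TCP connections on the apiserver port at all (a certificate or RBAC problem would fail later, after the dial succeeds). A quick check for that symptom, with the port taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl resolved localhost to ::1 above, so probe both address families.
	for _, addr := range []string{"127.0.0.1:8441", "[::1]:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // "connection refused" reproduces the log
			continue
		}
		conn.Close()
		fmt.Printf("%s: listener present, apiserver port is open\n", addr)
	}
}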
	I1202 21:47:06.384350  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:06.394676  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:06.394749  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:06.423508  488914 cri.go:89] found id: ""
	I1202 21:47:06.423523  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.423530  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:06.423536  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:06.423595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:06.449675  488914 cri.go:89] found id: ""
	I1202 21:47:06.449689  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.449696  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:06.449701  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:06.449762  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:06.480053  488914 cri.go:89] found id: ""
	I1202 21:47:06.480066  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.480073  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:06.480078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:06.480140  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:06.508415  488914 cri.go:89] found id: ""
	I1202 21:47:06.508428  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.508435  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:06.508440  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:06.508498  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:06.533743  488914 cri.go:89] found id: ""
	I1202 21:47:06.533756  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.533763  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:06.533776  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:06.533836  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:06.558457  488914 cri.go:89] found id: ""
	I1202 21:47:06.558472  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.558479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:06.558484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:06.558548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:06.585312  488914 cri.go:89] found id: ""
	I1202 21:47:06.585326  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.585333  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:06.585341  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:06.585352  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:06.600648  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:06.600665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:06.677036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:06.677046  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:06.677058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:06.757223  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:06.757244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:06.785439  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:06.785455  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
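The timestamps show the same probe repeating on a roughly three-second cadence: first a pgrep for a kube-apiserver process, then a per-component crictl query that keeps coming back empty. A sketch of that loop, assuming local execution and with the component list copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The control-plane components the log polls for, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for {
		// Fast path: is any apiserver process for this profile running?
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Otherwise ask the CRI for each expected container; an empty result
		// corresponds to the `found id: ""` / "No container was found" lines.
		for _, name := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
		time.Sleep(3 * time.Second)
	}
}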
	I1202 21:47:09.357941  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:09.369144  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:09.369207  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:09.398056  488914 cri.go:89] found id: ""
	I1202 21:47:09.398070  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.398077  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:09.398083  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:09.398143  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:09.424606  488914 cri.go:89] found id: ""
	I1202 21:47:09.424620  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.424628  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:09.424633  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:09.424694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:09.451520  488914 cri.go:89] found id: ""
	I1202 21:47:09.451535  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.451542  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:09.451547  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:09.451607  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:09.477315  488914 cri.go:89] found id: ""
	I1202 21:47:09.477330  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.477337  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:09.477344  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:09.477399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:09.503654  488914 cri.go:89] found id: ""
	I1202 21:47:09.503668  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.503675  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:09.503680  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:09.503750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:09.529545  488914 cri.go:89] found id: ""
	I1202 21:47:09.529558  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.529565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:09.529571  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:09.529629  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:09.554726  488914 cri.go:89] found id: ""
	I1202 21:47:09.554740  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.554747  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:09.554754  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:09.554767  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.620273  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:09.620293  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:09.635655  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:09.635672  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:09.720524  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:09.720534  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:09.720544  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:09.800379  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:09.800400  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:12.331221  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:12.341899  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:12.341957  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:12.369642  488914 cri.go:89] found id: ""
	I1202 21:47:12.369656  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.369663  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:12.369668  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:12.369729  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:12.395917  488914 cri.go:89] found id: ""
	I1202 21:47:12.395930  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.395938  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:12.395943  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:12.396015  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:12.422817  488914 cri.go:89] found id: ""
	I1202 21:47:12.422831  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.422838  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:12.422843  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:12.422903  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:12.451973  488914 cri.go:89] found id: ""
	I1202 21:47:12.451986  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.451993  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:12.451998  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:12.452057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:12.477543  488914 cri.go:89] found id: ""
	I1202 21:47:12.477557  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.477564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:12.477569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:12.477627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:12.504941  488914 cri.go:89] found id: ""
	I1202 21:47:12.504954  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.504961  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:12.504967  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:12.505025  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:12.530800  488914 cri.go:89] found id: ""
	I1202 21:47:12.530821  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.530828  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:12.530836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:12.530846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:12.596910  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:12.596929  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:12.612316  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:12.612333  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:12.684014  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:12.684025  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:12.684039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:12.771749  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:12.771771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:15.304325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:15.315385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:15.315451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:15.341411  488914 cri.go:89] found id: ""
	I1202 21:47:15.341427  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.341434  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:15.341439  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:15.341501  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:15.366798  488914 cri.go:89] found id: ""
	I1202 21:47:15.366811  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.366818  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:15.366824  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:15.366884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:15.391138  488914 cri.go:89] found id: ""
	I1202 21:47:15.391152  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.391159  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:15.391164  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:15.391226  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:15.415514  488914 cri.go:89] found id: ""
	I1202 21:47:15.415528  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.415535  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:15.415540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:15.415595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:15.440750  488914 cri.go:89] found id: ""
	I1202 21:47:15.440764  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.440771  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:15.440777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:15.440839  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:15.469806  488914 cri.go:89] found id: ""
	I1202 21:47:15.469820  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.469827  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:15.469833  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:15.469891  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:15.497648  488914 cri.go:89] found id: ""
	I1202 21:47:15.497661  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.497668  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:15.497675  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:15.497687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:15.567654  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:15.567679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:15.582770  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:15.582785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:15.647132  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:15.647143  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:15.647154  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:15.740463  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:15.740492  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.270232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:18.280720  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:18.280782  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:18.305710  488914 cri.go:89] found id: ""
	I1202 21:47:18.305724  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.305731  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:18.305736  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:18.305793  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:18.329526  488914 cri.go:89] found id: ""
	I1202 21:47:18.329539  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.329545  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:18.329550  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:18.329606  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:18.355166  488914 cri.go:89] found id: ""
	I1202 21:47:18.355195  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.355202  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:18.355207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:18.355275  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:18.381992  488914 cri.go:89] found id: ""
	I1202 21:47:18.382006  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.382013  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:18.382018  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:18.382080  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:18.410268  488914 cri.go:89] found id: ""
	I1202 21:47:18.410283  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.410290  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:18.410296  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:18.410354  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:18.434607  488914 cri.go:89] found id: ""
	I1202 21:47:18.434620  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.434627  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:18.434632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:18.434689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:18.460092  488914 cri.go:89] found id: ""
	I1202 21:47:18.460106  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.460112  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:18.460120  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:18.460130  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:18.525571  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:18.525580  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:18.525591  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:18.601752  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:18.601776  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.631242  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:18.631258  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:18.706458  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:18.706478  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
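Each failed kubectl invocation above logs five identical memcache.go:265 errors before printing its final message: the discovery client retries the API group-list request a handful of times before giving up. A generic bounded-retry helper in the same spirit (not kubectl's actual implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// withRetries runs f up to attempts times, pausing between failures,
// and wraps the last error if every attempt fails.
func withRetries(attempts int, delay time.Duration, f func() error) error {
	var err error
	for i := 1; i <= attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d/%d: %v\n", i, attempts, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := withRetries(5, 200*time.Millisecond, func() error {
		conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
		if err != nil {
			return err // each failure mirrors one memcache.go error line
		}
		return conn.Close()
	})
	if err != nil {
		fmt.Println(err)
	}
}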
	I1202 21:47:21.222232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:21.232120  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:21.232178  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:21.257057  488914 cri.go:89] found id: ""
	I1202 21:47:21.257071  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.257078  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:21.257089  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:21.257145  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:21.281739  488914 cri.go:89] found id: ""
	I1202 21:47:21.281752  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.281759  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:21.281764  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:21.281820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:21.306878  488914 cri.go:89] found id: ""
	I1202 21:47:21.306892  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.306899  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:21.306905  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:21.306959  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:21.332327  488914 cri.go:89] found id: ""
	I1202 21:47:21.332340  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.332347  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:21.332352  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:21.332408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:21.356717  488914 cri.go:89] found id: ""
	I1202 21:47:21.356730  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.356737  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:21.356742  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:21.356799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:21.380787  488914 cri.go:89] found id: ""
	I1202 21:47:21.380801  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.380807  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:21.380813  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:21.380867  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:21.405984  488914 cri.go:89] found id: ""
	I1202 21:47:21.405998  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.406005  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:21.406013  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:21.406023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:21.438420  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:21.438435  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:21.503149  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:21.503170  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.518755  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:21.518771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:21.584415  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:21.584425  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:21.584437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.161915  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:24.172338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:24.172401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:24.197081  488914 cri.go:89] found id: ""
	I1202 21:47:24.197095  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.197102  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:24.197108  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:24.197166  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:24.222792  488914 cri.go:89] found id: ""
	I1202 21:47:24.222806  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.222827  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:24.222833  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:24.222898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:24.248463  488914 cri.go:89] found id: ""
	I1202 21:47:24.248486  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.248495  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:24.248500  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:24.248561  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:24.282539  488914 cri.go:89] found id: ""
	I1202 21:47:24.282554  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.282561  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:24.282567  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:24.282636  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:24.308071  488914 cri.go:89] found id: ""
	I1202 21:47:24.308086  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.308093  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:24.308098  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:24.308165  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:24.333666  488914 cri.go:89] found id: ""
	I1202 21:47:24.333689  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.333696  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:24.333702  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:24.333769  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:24.363212  488914 cri.go:89] found id: ""
	I1202 21:47:24.363226  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.363233  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:24.363254  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:24.363265  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:24.428642  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:24.428664  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:24.444347  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:24.444363  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:24.510036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:24.510047  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:24.510058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.585705  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:24.585726  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:27.116827  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:27.127233  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:27.127299  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:27.156311  488914 cri.go:89] found id: ""
	I1202 21:47:27.156325  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.156332  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:27.156337  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:27.156401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:27.180597  488914 cri.go:89] found id: ""
	I1202 21:47:27.180611  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.180617  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:27.180623  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:27.180682  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:27.205333  488914 cri.go:89] found id: ""
	I1202 21:47:27.205347  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.205354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:27.205359  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:27.205417  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:27.231165  488914 cri.go:89] found id: ""
	I1202 21:47:27.231179  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.231186  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:27.231192  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:27.231251  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:27.260640  488914 cri.go:89] found id: ""
	I1202 21:47:27.260654  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.260662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:27.260667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:27.260732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:27.286552  488914 cri.go:89] found id: ""
	I1202 21:47:27.286566  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.286573  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:27.286578  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:27.286637  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:27.311590  488914 cri.go:89] found id: ""
	I1202 21:47:27.311604  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.311611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:27.311619  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:27.311630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:27.376291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:27.376311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:27.391299  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:27.391314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:27.452046  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
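	The five memcache.go errors and the closing "connection refused" message mean kubectl could not even open a TCP connection to localhost:8441, which is consistent with the empty crictl listings above: no kube-apiserver container exists, so nothing is serving the port. A quick manual probe of the same condition from inside the node would look like this (a sketch using standard tools, not commands issued by the harness):

	# Is anything listening on the apiserver port kubectl is dialing?
	sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
	# Probe the health endpoint directly; -k tolerates the self-signed cert.
	curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver unreachable"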
	I1202 21:47:27.452056  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:27.452067  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:27.527099  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:27.527119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
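	From this point the harness repeats the same cycle roughly every three seconds until it gives up: pgrep for a kube-apiserver process, a per-component crictl sweep, then the kubelet, dmesg, describe-nodes, CRI-O, and container-status gathers. One iteration, reconstructed as a plain shell sequence from the commands logged above (a schematic summary, not minikube's own code; component names and line counts are copied verbatim from the log):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"   # empty output => component absent
	done
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection refused
	sudo journalctl -u crio -n 400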
	I1202 21:47:30.055495  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:30.067197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:30.067272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:30.093385  488914 cri.go:89] found id: ""
	I1202 21:47:30.093400  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.093407  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:30.093413  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:30.093475  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:30.120468  488914 cri.go:89] found id: ""
	I1202 21:47:30.120482  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.120490  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:30.120495  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:30.120558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:30.147744  488914 cri.go:89] found id: ""
	I1202 21:47:30.147759  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.147767  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:30.147772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:30.147838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:30.173628  488914 cri.go:89] found id: ""
	I1202 21:47:30.173650  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.173658  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:30.173664  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:30.173742  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:30.201952  488914 cri.go:89] found id: ""
	I1202 21:47:30.201992  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.202001  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:30.202007  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:30.202075  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:30.228366  488914 cri.go:89] found id: ""
	I1202 21:47:30.228380  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.228387  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:30.228399  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:30.228468  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:30.254412  488914 cri.go:89] found id: ""
	I1202 21:47:30.254426  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.254434  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:30.254442  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:30.254453  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:30.330454  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:30.330474  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.364243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:30.364259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:30.429823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:30.429841  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:30.445036  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:30.445058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:30.506029  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:33.006821  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:33.017853  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:33.017924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:33.043314  488914 cri.go:89] found id: ""
	I1202 21:47:33.043328  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.043335  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:33.043343  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:33.043402  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:33.068806  488914 cri.go:89] found id: ""
	I1202 21:47:33.068820  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.068826  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:33.068831  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:33.068889  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:33.097822  488914 cri.go:89] found id: ""
	I1202 21:47:33.097835  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.097842  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:33.097847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:33.097905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:33.123154  488914 cri.go:89] found id: ""
	I1202 21:47:33.123168  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.123176  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:33.123181  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:33.123240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:33.148284  488914 cri.go:89] found id: ""
	I1202 21:47:33.148298  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.148305  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:33.148310  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:33.148369  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:33.173434  488914 cri.go:89] found id: ""
	I1202 21:47:33.173448  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.173454  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:33.173460  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:33.173519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:33.198619  488914 cri.go:89] found id: ""
	I1202 21:47:33.198633  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.198640  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:33.198647  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:33.198662  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:33.263426  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:33.263446  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:33.279026  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:33.279042  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:33.339351  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:33.331868   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.332345   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334080   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334388   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.335856   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:33.339361  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:33.339372  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:33.418569  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:33.418588  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:35.951124  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:35.962387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:35.962491  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:35.989088  488914 cri.go:89] found id: ""
	I1202 21:47:35.989102  488914 logs.go:282] 0 containers: []
	W1202 21:47:35.989109  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:35.989115  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:35.989176  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:36.017461  488914 cri.go:89] found id: ""
	I1202 21:47:36.017477  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.017484  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:36.017490  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:36.017614  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:36.046790  488914 cri.go:89] found id: ""
	I1202 21:47:36.046805  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.046812  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:36.046817  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:36.046875  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:36.073683  488914 cri.go:89] found id: ""
	I1202 21:47:36.073697  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.073704  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:36.073710  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:36.073767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:36.101900  488914 cri.go:89] found id: ""
	I1202 21:47:36.101914  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.101921  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:36.101926  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:36.101985  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:36.130435  488914 cri.go:89] found id: ""
	I1202 21:47:36.130449  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.130456  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:36.130462  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:36.130524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:36.157134  488914 cri.go:89] found id: ""
	I1202 21:47:36.157148  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.157155  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:36.157163  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:36.157173  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:36.221900  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:36.221919  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:36.237051  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:36.237068  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:36.299876  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:36.291935   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.292632   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294289   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294810   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.296452   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:36.299886  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:36.299910  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:36.374213  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:36.374232  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:38.902545  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:38.913357  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:38.913415  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:38.944543  488914 cri.go:89] found id: ""
	I1202 21:47:38.944557  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.944563  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:38.944569  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:38.944627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:38.975916  488914 cri.go:89] found id: ""
	I1202 21:47:38.975930  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.975937  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:38.975942  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:38.976001  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:39.009795  488914 cri.go:89] found id: ""
	I1202 21:47:39.009810  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.009817  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:39.009823  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:39.009886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:39.034688  488914 cri.go:89] found id: ""
	I1202 21:47:39.034718  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.034726  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:39.034732  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:39.034805  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:39.059667  488914 cri.go:89] found id: ""
	I1202 21:47:39.059693  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.059701  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:39.059706  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:39.059767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:39.085837  488914 cri.go:89] found id: ""
	I1202 21:47:39.085851  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.085868  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:39.085873  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:39.085941  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:39.111280  488914 cri.go:89] found id: ""
	I1202 21:47:39.111295  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.111302  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:39.111310  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:39.111320  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:39.175646  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:39.175668  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:39.190971  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:39.190987  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:39.258563  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:39.251357   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.251945   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253419   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253861   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.254959   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:39.258573  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:39.258584  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:39.333779  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:39.333798  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:41.863817  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:41.873822  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:41.873882  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:41.899560  488914 cri.go:89] found id: ""
	I1202 21:47:41.899585  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.899592  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:41.899598  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:41.899663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:41.937866  488914 cri.go:89] found id: ""
	I1202 21:47:41.937880  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.937887  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:41.937892  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:41.937960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:41.971862  488914 cri.go:89] found id: ""
	I1202 21:47:41.971876  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.971901  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:41.971907  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:41.971975  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:42.010639  488914 cri.go:89] found id: ""
	I1202 21:47:42.010655  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.010663  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:42.010695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:42.010778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:42.040775  488914 cri.go:89] found id: ""
	I1202 21:47:42.040790  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.040800  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:42.040805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:42.040881  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:42.072124  488914 cri.go:89] found id: ""
	I1202 21:47:42.072139  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.072149  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:42.072175  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:42.072252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:42.105424  488914 cri.go:89] found id: ""
	I1202 21:47:42.105439  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.105447  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:42.105456  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:42.105467  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:42.175007  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:42.175032  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:42.194759  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:42.194785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:42.271235  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:42.261967   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.262745   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264485   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264882   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.266741   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:42.271247  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:42.271260  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:42.360263  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:42.360296  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:44.892475  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:44.902425  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:44.902484  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:44.929930  488914 cri.go:89] found id: ""
	I1202 21:47:44.929944  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.929952  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:44.929957  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:44.930017  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:44.959205  488914 cri.go:89] found id: ""
	I1202 21:47:44.959219  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.959225  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:44.959231  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:44.959288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:44.991335  488914 cri.go:89] found id: ""
	I1202 21:47:44.991350  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.991357  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:44.991362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:44.991437  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:45.047326  488914 cri.go:89] found id: ""
	I1202 21:47:45.047342  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.047350  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:45.047358  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:45.047440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:45.110770  488914 cri.go:89] found id: ""
	I1202 21:47:45.110787  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.110796  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:45.110803  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:45.110872  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:45.147274  488914 cri.go:89] found id: ""
	I1202 21:47:45.147290  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.147298  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:45.147304  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:45.147372  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:45.230398  488914 cri.go:89] found id: ""
	I1202 21:47:45.230413  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.230421  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:45.230437  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:45.230457  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:45.315457  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:45.307106   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.308124   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.309943   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.310298   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.311989   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:45.315469  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:45.315479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:45.391401  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:45.391421  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:45.422183  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:45.422200  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:45.491250  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:45.491269  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.007522  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:48.019509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:48.019579  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:48.047045  488914 cri.go:89] found id: ""
	I1202 21:47:48.047059  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.047066  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:48.047072  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:48.047133  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:48.073355  488914 cri.go:89] found id: ""
	I1202 21:47:48.073370  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.073377  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:48.073383  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:48.073443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:48.101623  488914 cri.go:89] found id: ""
	I1202 21:47:48.101640  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.101653  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:48.101658  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:48.101728  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:48.128708  488914 cri.go:89] found id: ""
	I1202 21:47:48.128722  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.128729  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:48.128734  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:48.128795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:48.154337  488914 cri.go:89] found id: ""
	I1202 21:47:48.154352  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.154359  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:48.154364  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:48.154426  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:48.181724  488914 cri.go:89] found id: ""
	I1202 21:47:48.181739  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.181746  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:48.181752  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:48.181810  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:48.207628  488914 cri.go:89] found id: ""
	I1202 21:47:48.207641  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.207648  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:48.207655  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:48.207665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:48.273678  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:48.273699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.289393  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:48.289410  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:48.353116  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:48.345571   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.346016   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347574   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347915   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.349479   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:48.353126  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:48.353138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:48.429785  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:48.429809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:50.961028  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:50.971337  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:50.971408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:51.004925  488914 cri.go:89] found id: ""
	I1202 21:47:51.004941  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.004949  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:51.004956  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:51.005023  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:51.033852  488914 cri.go:89] found id: ""
	I1202 21:47:51.033866  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.033873  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:51.033879  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:51.033951  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:51.065370  488914 cri.go:89] found id: ""
	I1202 21:47:51.065384  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.065392  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:51.065397  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:51.065454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:51.091797  488914 cri.go:89] found id: ""
	I1202 21:47:51.091811  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.091819  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:51.091824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:51.091886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:51.118245  488914 cri.go:89] found id: ""
	I1202 21:47:51.118260  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.118267  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:51.118273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:51.118350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:51.144813  488914 cri.go:89] found id: ""
	I1202 21:47:51.144828  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.144835  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:51.144841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:51.144898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:51.170591  488914 cri.go:89] found id: ""
	I1202 21:47:51.170605  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.170622  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:51.170630  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:51.170641  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:51.201061  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:51.201078  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:51.268903  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:51.268922  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:51.286516  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:51.286532  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:51.360635  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:51.352997   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.353506   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.354983   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.355562   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.357043   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:51.360647  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:51.360658  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
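The block above is minikube's control-plane probe: it pgreps for a kube-apiserver process, asks the CRI runtime for each expected component container, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, and CRI-O output. A minimal sketch that replays the same probe by hand (assuming shell access to the node; every command is taken from the log above):

    #!/bin/bash
    # Replay minikube's control-plane probe using the commands from the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done
    # Fallback log collection, as in the "Gathering logs" steps above:
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager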
	I1202 21:47:53.937801  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:53.951326  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:53.951403  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:53.981411  488914 cri.go:89] found id: ""
	I1202 21:47:53.981424  488914 logs.go:282] 0 containers: []
	W1202 21:47:53.981431  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:53.981444  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:53.981504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:54.019553  488914 cri.go:89] found id: ""
	I1202 21:47:54.019568  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.019576  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:54.019581  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:54.019641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:54.045870  488914 cri.go:89] found id: ""
	I1202 21:47:54.045884  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.045891  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:54.045896  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:54.045960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:54.072428  488914 cri.go:89] found id: ""
	I1202 21:47:54.072443  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.072450  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:54.072455  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:54.072519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:54.098413  488914 cri.go:89] found id: ""
	I1202 21:47:54.098427  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.098434  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:54.098439  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:54.098497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:54.124502  488914 cri.go:89] found id: ""
	I1202 21:47:54.124517  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.124524  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:54.124529  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:54.124589  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:54.151244  488914 cri.go:89] found id: ""
	I1202 21:47:54.151258  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.151265  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:54.151273  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:54.151284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:54.213677  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:54.205894   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.206296   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.207892   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.208209   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.209760   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:54.213688  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:54.213700  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:54.289814  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:54.289835  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:54.319415  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:54.319432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:54.385725  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:54.385745  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
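Every describe-nodes attempt above fails the same way: kubectl cannot even fetch the API group list because nothing accepts connections on [::1]:8441, the apiserver port this profile's kubeconfig points at. A quick check from the node (assuming ss and curl are installed; neither command appears in the log itself):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep -w 8441 || echo "nothing listening on 8441"
    # Probe the apiserver health endpoint directly (self-signed cert, hence -k):
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "healthz unreachable"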
	I1202 21:47:56.902920  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:56.915363  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:56.915439  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:56.942569  488914 cri.go:89] found id: ""
	I1202 21:47:56.942583  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.942590  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:56.942596  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:56.942655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:56.975362  488914 cri.go:89] found id: ""
	I1202 21:47:56.975384  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.975391  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:56.975397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:56.975456  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:57.006861  488914 cri.go:89] found id: ""
	I1202 21:47:57.006877  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.006884  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:57.006890  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:57.006958  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:57.033667  488914 cri.go:89] found id: ""
	I1202 21:47:57.033682  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.033689  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:57.033695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:57.033751  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:57.059458  488914 cri.go:89] found id: ""
	I1202 21:47:57.059472  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.059479  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:57.059484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:57.059544  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:57.086098  488914 cri.go:89] found id: ""
	I1202 21:47:57.086112  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.086130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:57.086136  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:57.086206  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:57.112732  488914 cri.go:89] found id: ""
	I1202 21:47:57.112747  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.112754  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:57.112762  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:57.112773  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:57.141211  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:57.141226  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:57.210823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:57.210842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:57.226149  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:57.226166  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:57.287720  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:57.280020   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.280594   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282136   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282592   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.284108   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:57.287730  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:57.287742  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
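The timestamps show the probe firing roughly every three seconds, the signature of a poll-until-deadline loop rather than a one-shot check. A sketch of the equivalent loop (the 10-minute budget is an assumption for illustration; the real timeout is not visible in this excerpt):

    # Poll until the kube-apiserver container appears or the deadline passes.
    deadline=$(( $(date +%s) + 600 ))
    until [ -n "$(sudo crictl ps --quiet --name=kube-apiserver)" ]; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver container is up"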
	I1202 21:47:59.865507  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:59.875824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:59.875886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:59.901721  488914 cri.go:89] found id: ""
	I1202 21:47:59.901735  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.901741  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:59.901747  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:59.901834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:59.938763  488914 cri.go:89] found id: ""
	I1202 21:47:59.938777  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.938784  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:59.938789  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:59.938844  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:59.968613  488914 cri.go:89] found id: ""
	I1202 21:47:59.968627  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.968634  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:59.968639  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:59.968696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:00.011145  488914 cri.go:89] found id: ""
	I1202 21:48:00.011162  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.011172  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:00.011179  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:00.011248  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:00.128636  488914 cri.go:89] found id: ""
	I1202 21:48:00.128653  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.128662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:00.128668  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:00.128743  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:00.191602  488914 cri.go:89] found id: ""
	I1202 21:48:00.191633  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.191642  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:00.191651  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:00.191735  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:00.286597  488914 cri.go:89] found id: ""
	I1202 21:48:00.286618  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.286626  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:00.286635  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:00.286657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:00.393972  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:00.394009  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:00.425438  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:00.425462  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:00.522799  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:00.522810  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:00.522822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:00.603332  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:00.603356  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:03.142041  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:03.152666  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:03.152730  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:03.179575  488914 cri.go:89] found id: ""
	I1202 21:48:03.179589  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.179596  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:03.179601  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:03.179666  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:03.208278  488914 cri.go:89] found id: ""
	I1202 21:48:03.208293  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.208300  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:03.208305  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:03.208365  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:03.237068  488914 cri.go:89] found id: ""
	I1202 21:48:03.237081  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.237088  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:03.237093  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:03.237150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:03.262185  488914 cri.go:89] found id: ""
	I1202 21:48:03.262199  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.262206  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:03.262212  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:03.262270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:03.287056  488914 cri.go:89] found id: ""
	I1202 21:48:03.287076  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.287082  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:03.287088  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:03.287150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:03.312745  488914 cri.go:89] found id: ""
	I1202 21:48:03.312759  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.312766  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:03.312774  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:03.312831  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:03.337493  488914 cri.go:89] found id: ""
	I1202 21:48:03.337507  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.337514  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:03.337522  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:03.337535  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:03.398946  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:03.398957  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:03.398969  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:03.475063  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:03.475083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:03.502836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:03.502852  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:03.569966  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:03.569985  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
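The fallback only tails the CRI-O journal; it never checks whether the runtime itself is healthy. The empty-but-successful crictl listings above suggest CRI-O is answering, but a direct liveness check is cheap (commands assumed available on the node, not taken from the log):

    # Runtime liveness, beyond tailing its journal:
    systemctl is-active crio
    sudo crictl info | head -n 20   # runtime status as CRI-O reports it
    sudo crictl pods                # sandboxes the kubelet has asked for, if any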
	I1202 21:48:06.085423  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:06.096220  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:06.096284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:06.124362  488914 cri.go:89] found id: ""
	I1202 21:48:06.124378  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.124384  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:06.124392  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:06.124451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:06.150807  488914 cri.go:89] found id: ""
	I1202 21:48:06.150822  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.150829  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:06.150835  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:06.150896  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:06.177096  488914 cri.go:89] found id: ""
	I1202 21:48:06.177110  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.177117  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:06.177122  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:06.177189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:06.202670  488914 cri.go:89] found id: ""
	I1202 21:48:06.202684  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.202691  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:06.202697  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:06.202760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:06.227599  488914 cri.go:89] found id: ""
	I1202 21:48:06.227614  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.227626  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:06.227632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:06.227692  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:06.252361  488914 cri.go:89] found id: ""
	I1202 21:48:06.252375  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.252381  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:06.252387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:06.252443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:06.278301  488914 cri.go:89] found id: ""
	I1202 21:48:06.278315  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.278323  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:06.278331  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:06.278341  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:06.344608  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:06.344629  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.359909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:06.359925  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:06.427972  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:06.427982  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:06.427993  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:06.503390  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:06.503409  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.032284  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:09.043491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:09.043554  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:09.073343  488914 cri.go:89] found id: ""
	I1202 21:48:09.073358  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.073365  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:09.073371  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:09.073438  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:09.106311  488914 cri.go:89] found id: ""
	I1202 21:48:09.106325  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.106332  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:09.106337  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:09.106400  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:09.137607  488914 cri.go:89] found id: ""
	I1202 21:48:09.137622  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.137630  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:09.137635  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:09.137696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:09.165465  488914 cri.go:89] found id: ""
	I1202 21:48:09.165479  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.165486  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:09.165491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:09.165553  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:09.191695  488914 cri.go:89] found id: ""
	I1202 21:48:09.191709  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.191715  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:09.191721  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:09.191778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:09.217199  488914 cri.go:89] found id: ""
	I1202 21:48:09.217213  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.217221  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:09.217227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:09.217284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:09.243947  488914 cri.go:89] found id: ""
	I1202 21:48:09.243961  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.243977  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:09.243985  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:09.243995  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:09.259022  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:09.259038  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:09.325462  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:09.325472  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:09.325483  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:09.404565  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:09.404586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.435844  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:09.435860  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:12.005527  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:12.017298  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:12.017364  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:12.043631  488914 cri.go:89] found id: ""
	I1202 21:48:12.043645  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.043652  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:12.043657  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:12.043717  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:12.072548  488914 cri.go:89] found id: ""
	I1202 21:48:12.072562  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.072569  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:12.072574  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:12.072634  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:12.097779  488914 cri.go:89] found id: ""
	I1202 21:48:12.097792  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.097799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:12.097806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:12.097861  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:12.122380  488914 cri.go:89] found id: ""
	I1202 21:48:12.122394  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.122400  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:12.122406  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:12.122462  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:12.147485  488914 cri.go:89] found id: ""
	I1202 21:48:12.147499  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.147506  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:12.147511  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:12.147569  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:12.172352  488914 cri.go:89] found id: ""
	I1202 21:48:12.172372  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.172379  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:12.172385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:12.172451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:12.197386  488914 cri.go:89] found id: ""
	I1202 21:48:12.197400  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.197406  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:12.197414  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:12.197425  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:12.212275  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:12.212291  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:12.283599  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:12.283609  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:12.283620  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:12.362146  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:12.362177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:12.394426  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:12.394452  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:14.959300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:14.969317  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:14.969378  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:14.995679  488914 cri.go:89] found id: ""
	I1202 21:48:14.995693  488914 logs.go:282] 0 containers: []
	W1202 21:48:14.995701  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:14.995706  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:14.995767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:15.039291  488914 cri.go:89] found id: ""
	I1202 21:48:15.039307  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.039316  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:15.039322  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:15.039440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:15.066778  488914 cri.go:89] found id: ""
	I1202 21:48:15.066793  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.066800  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:15.066806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:15.066866  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:15.096009  488914 cri.go:89] found id: ""
	I1202 21:48:15.096031  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.096039  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:15.096045  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:15.096109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:15.124965  488914 cri.go:89] found id: ""
	I1202 21:48:15.124980  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.124987  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:15.124992  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:15.125055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:15.151140  488914 cri.go:89] found id: ""
	I1202 21:48:15.151155  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.151162  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:15.151168  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:15.151225  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:15.180343  488914 cri.go:89] found id: ""
	I1202 21:48:15.180362  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.180369  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:15.180378  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:15.180389  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:15.245885  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:15.245905  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:15.261189  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:15.261204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:15.329096  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:15.329106  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:15.329119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:15.404768  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:15.404789  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
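Since no control-plane containers ever materialize across any of these cycles, the likeliest gap is between the kubelet and its static pod manifests. A final check (the manifest path is the kubeadm default and an assumption here; this excerpt never shows it):

    # Are the static pod manifests present for the kubelet to launch?
    ls -l /etc/kubernetes/manifests/
    # Is the kubelet reporting errors while trying to start them?
    sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'error|fail' | tail -n 20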
	I1202 21:48:17.936657  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:17.948615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:17.948678  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:17.980274  488914 cri.go:89] found id: ""
	I1202 21:48:17.980288  488914 logs.go:282] 0 containers: []
	W1202 21:48:17.980295  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:17.980301  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:17.980358  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:18.009972  488914 cri.go:89] found id: ""
	I1202 21:48:18.009988  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.009995  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:18.010000  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:18.010068  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:18.037292  488914 cri.go:89] found id: ""
	I1202 21:48:18.037307  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.037314  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:18.037320  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:18.037389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:18.068010  488914 cri.go:89] found id: ""
	I1202 21:48:18.068025  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.068034  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:18.068039  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:18.068100  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:18.098519  488914 cri.go:89] found id: ""
	I1202 21:48:18.098537  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.098545  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:18.098552  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:18.098616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:18.125321  488914 cri.go:89] found id: ""
	I1202 21:48:18.125336  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.125343  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:18.125349  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:18.125408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:18.154110  488914 cri.go:89] found id: ""
	I1202 21:48:18.154124  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.154131  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:18.154139  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:18.154161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:18.186862  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:18.186879  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:18.252168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:18.252188  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:18.267297  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:18.267312  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:18.330969  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:18.322138   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.322985   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.324625   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.325317   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.326981   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:18.330979  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:18.330989  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:20.906864  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:20.918719  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:20.918779  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:20.946664  488914 cri.go:89] found id: ""
	I1202 21:48:20.946681  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.946688  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:20.946694  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:20.946757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:20.973074  488914 cri.go:89] found id: ""
	I1202 21:48:20.973088  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.973095  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:20.973100  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:20.973160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:20.998478  488914 cri.go:89] found id: ""
	I1202 21:48:20.998495  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.998503  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:20.998509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:20.998582  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:21.033676  488914 cri.go:89] found id: ""
	I1202 21:48:21.033691  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.033708  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:21.033714  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:21.033773  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:21.059527  488914 cri.go:89] found id: ""
	I1202 21:48:21.059549  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.059557  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:21.059562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:21.059623  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:21.088534  488914 cri.go:89] found id: ""
	I1202 21:48:21.088548  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.088555  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:21.088562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:21.088618  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:21.114102  488914 cri.go:89] found id: ""
	I1202 21:48:21.114116  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.114123  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:21.114130  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:21.114141  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:21.176428  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:21.168087   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.168660   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.170374   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.171027   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.172682   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:21.176438  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:21.176449  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:21.251600  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:21.251621  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:21.278584  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:21.278600  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:21.350258  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:21.350279  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:23.865709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:23.876050  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:23.876119  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:23.906000  488914 cri.go:89] found id: ""
	I1202 21:48:23.906014  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.906021  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:23.906027  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:23.906094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:23.934001  488914 cri.go:89] found id: ""
	I1202 21:48:23.934015  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.934022  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:23.934028  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:23.934088  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:23.969619  488914 cri.go:89] found id: ""
	I1202 21:48:23.969633  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.969640  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:23.969645  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:23.969710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:23.997123  488914 cri.go:89] found id: ""
	I1202 21:48:23.997137  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.997144  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:23.997149  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:23.997211  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:24.027561  488914 cri.go:89] found id: ""
	I1202 21:48:24.027576  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.027584  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:24.027590  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:24.027660  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:24.053543  488914 cri.go:89] found id: ""
	I1202 21:48:24.053558  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.053565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:24.053570  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:24.053641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:24.080080  488914 cri.go:89] found id: ""
	I1202 21:48:24.080094  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.080101  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:24.080109  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:24.080119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:24.147092  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:24.147112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:24.162650  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:24.162666  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:24.225019  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:24.225029  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:24.225039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:24.300286  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:24.300307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:26.831634  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:26.843079  488914 kubeadm.go:602] duration metric: took 4m3.730369294s to restartPrimaryControlPlane
	W1202 21:48:26.843152  488914 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:48:26.843233  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:48:27.259211  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:48:27.272350  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:48:27.280460  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:48:27.280517  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:48:27.288570  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:48:27.288578  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:48:27.288628  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:48:27.296654  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:48:27.296709  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:48:27.304086  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:48:27.311898  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:48:27.311953  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:48:27.319289  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.326825  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:48:27.326888  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.334620  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:48:27.342084  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:48:27.342139  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:48:27.349467  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:48:27.386582  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:48:27.386896  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:48:27.472364  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:48:27.472439  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:48:27.472489  488914 kubeadm.go:319] OS: Linux
	I1202 21:48:27.472545  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:48:27.472601  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:48:27.472644  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:48:27.472700  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:48:27.472753  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:48:27.472804  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:48:27.472859  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:48:27.472915  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:48:27.472973  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:48:27.543309  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:48:27.543431  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:48:27.543527  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:48:27.554036  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:48:27.559373  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:48:27.559468  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:48:27.559542  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:48:27.559629  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:48:27.559701  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:48:27.559787  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:48:27.559841  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:48:27.559915  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:48:27.559985  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:48:27.560076  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:48:27.560159  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:48:27.560210  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:48:27.560269  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:48:27.850282  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:48:28.505037  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:48:28.762985  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:48:28.951263  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:48:29.183372  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:48:29.184043  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:48:29.186561  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:48:29.189676  488914 out.go:252]   - Booting up control plane ...
	I1202 21:48:29.189765  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:48:29.189838  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:48:29.191619  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:48:29.207350  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:48:29.207778  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:48:29.215590  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:48:29.215853  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:48:29.216063  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:48:29.353309  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:48:29.353417  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:52:29.354218  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001230264s
	I1202 21:52:29.354245  488914 kubeadm.go:319] 
	I1202 21:52:29.354298  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:52:29.354329  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:52:29.354427  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:52:29.354432  488914 kubeadm.go:319] 
	I1202 21:52:29.354529  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:52:29.354559  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:52:29.354587  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:52:29.354590  488914 kubeadm.go:319] 
	I1202 21:52:29.358907  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:52:29.359370  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:52:29.359489  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:52:29.359719  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:52:29.359724  488914 kubeadm.go:319] 
	I1202 21:52:29.359816  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:52:29.359952  488914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230264s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
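The two kubeadm warnings above point at the likely root cause rather than a transient timeout: this arm64 host runs kernel 5.15 with cgroups v1, and kubelet v1.35 will not become healthy on cgroup v1 unless 'FailCgroupV1' is set to 'false' (plus the validation skip, which minikube already requests via SystemVerification in --ignore-preflight-errors). A minimal hand-check on the node, sketched from the commands the log itself recommends; the YAML spelling failCgroupV1 and the append-to-config workaround are assumptions, not a verified fix:

	# confirm which cgroup hierarchy the node exposes (tmpfs => v1, cgroup2fs => v2)
	stat -fc %T /sys/fs/cgroup
	# inspect the kubelet unit directly, as the kubeadm output suggests
	systemctl status kubelet
	journalctl -xeu kubelet
	# hypothetical override for kubelet v1.35+ on a cgroup v1 host
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet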
	
	I1202 21:52:29.360041  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:52:29.774288  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:52:29.786781  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:52:29.786832  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:52:29.794551  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:52:29.794562  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:52:29.794615  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:52:29.802140  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:52:29.802200  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:52:29.809778  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:52:29.817315  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:52:29.817375  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:52:29.824944  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.832581  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:52:29.832636  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.840105  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:52:29.848039  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:52:29.848102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:52:29.855571  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:52:29.895459  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:52:29.895508  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:52:29.966851  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:52:29.966918  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:52:29.966952  488914 kubeadm.go:319] OS: Linux
	I1202 21:52:29.967027  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:52:29.967074  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:52:29.967120  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:52:29.967166  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:52:29.967212  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:52:29.967259  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:52:29.967302  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:52:29.967348  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:52:29.967393  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:52:30.044273  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:52:30.044406  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:52:30.044512  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:52:30.059289  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:52:30.064606  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:52:30.064707  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:52:30.064778  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:52:30.064861  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:52:30.064927  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:52:30.065002  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:52:30.065061  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:52:30.065130  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:52:30.065197  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:52:30.065280  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:52:30.065358  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:52:30.065394  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:52:30.065457  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:52:30.391272  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:52:30.580061  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:52:30.892953  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:52:31.052311  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:52:31.356833  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:52:31.357398  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:52:31.360444  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:52:31.363666  488914 out.go:252]   - Booting up control plane ...
	I1202 21:52:31.363767  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:52:31.363843  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:52:31.364787  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:52:31.380952  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:52:31.381067  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:52:31.389182  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:52:31.389514  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:52:31.389769  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:52:31.510935  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:52:31.511077  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:56:31.511610  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043188s
	I1202 21:56:31.511635  488914 kubeadm.go:319] 
	I1202 21:56:31.511691  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:56:31.511724  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:56:31.511828  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:56:31.511833  488914 kubeadm.go:319] 
	I1202 21:56:31.511936  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:56:31.511966  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:56:31.511996  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:56:31.511999  488914 kubeadm.go:319] 
	I1202 21:56:31.516147  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:56:31.516591  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:56:31.516707  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:56:31.516982  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:56:31.516989  488914 kubeadm.go:319] 
	I1202 21:56:31.517086  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:56:31.517154  488914 kubeadm.go:403] duration metric: took 12m8.4399317s to StartCluster
	I1202 21:56:31.517186  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:56:31.517279  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:56:31.545508  488914 cri.go:89] found id: ""
	I1202 21:56:31.545521  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.545528  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:56:31.545538  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:56:31.545593  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:56:31.573505  488914 cri.go:89] found id: ""
	I1202 21:56:31.573519  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.573526  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:56:31.573532  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:56:31.573594  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:56:31.598620  488914 cri.go:89] found id: ""
	I1202 21:56:31.598634  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.598642  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:56:31.598647  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:56:31.598718  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:56:31.624500  488914 cri.go:89] found id: ""
	I1202 21:56:31.624514  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.624522  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:56:31.624528  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:56:31.624590  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:56:31.650576  488914 cri.go:89] found id: ""
	I1202 21:56:31.650591  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.650598  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:56:31.650604  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:56:31.650665  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:56:31.677681  488914 cri.go:89] found id: ""
	I1202 21:56:31.677696  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.677703  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:56:31.677709  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:56:31.677772  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:56:31.702889  488914 cri.go:89] found id: ""
	I1202 21:56:31.702903  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.702910  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:56:31.702918  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:56:31.702928  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:56:31.769428  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:56:31.769447  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:56:31.784680  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:56:31.784696  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:56:31.848558  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:56:31.848570  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:56:31.848581  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:56:31.924323  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:56:31.924343  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:56:31.952600  488914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:56:31.952640  488914 out.go:285] * 
	W1202 21:56:31.952744  488914 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:56:31.952799  488914 out.go:285] * 
	W1202 21:56:31.955203  488914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:56:31.960375  488914 out.go:203] 
	W1202 21:56:31.963105  488914 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:56:31.963144  488914 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:56:31.963163  488914 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:56:31.966130  488914 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.45283707Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=bf00db59-611c-44fb-b66b-5de338fe239d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486207629Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486338707Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486372447Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.31254149Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=02dfde09-63cb-48a9-bc75-2498ded8aebd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338777762Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338914322Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338952624Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364142306Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364305064Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364345213Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.448620533Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=2c172ded-5053-4702-8981-86fe65b3eb5a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473261763Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473491575Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473554164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502089674Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502268679Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502308638Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.270878698Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=6683c882-fed2-46df-a5c6-4c16ad59fbea name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300274442Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300423301Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300466198Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325738621Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325897843Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325952326Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:58:38.224240   23957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:38.224853   23957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:38.226373   23957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:38.226721   23957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:58:38.228162   23957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:58:38 up  3:40,  0 user,  load average: 0.39, 0.25, 0.33
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:58:35 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:36 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 02 21:58:36 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:36 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:36 functional-066896 kubelet[23847]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:36 functional-066896 kubelet[23847]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:36 functional-066896 kubelet[23847]: E1202 21:58:36.738864   23847 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:36 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:36 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:37 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 02 21:58:37 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:37 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:37 functional-066896 kubelet[23867]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:37 functional-066896 kubelet[23867]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:37 functional-066896 kubelet[23867]: E1202 21:58:37.471829   23867 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:37 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:37 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:58:38 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 02 21:58:38 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:38 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:58:38 functional-066896 kubelet[23956]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:38 functional-066896 kubelet[23956]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:58:38 functional-066896 kubelet[23956]: E1202 21:58:38.223304   23956 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:58:38 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:58:38 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (347.10109ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.27s)
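Note on the failure mode: all three kubeadm dumps above, the kubelet journal (restart counter at 1131), and the connection-refused spam in the tests that follow share one root cause, stated directly in the kubelet log: "kubelet is configured to not run on a host using cgroup v1". kubelet v1.35.0-beta.0 fails this validation at startup, so the apiserver on port 8441 never comes up. The preflight warning names the opt-in explicitly. What follows is a minimal sketch only, assuming this arm64 runner must stay on its cgroup v1 AMI; failCgroupV1 is a real KubeletConfiguration (kubelet.config.k8s.io/v1beta1) field, but whether and how this CI job could plumb such a config patch through minikube is an assumption, not something the log confirms:

	# Sketch of the cgroup v1 opt-in named by the preflight warning above.
	# kubelet-cgroupv1-optin.yaml is a hypothetical file name used here for illustration.
	cat <<'EOF' > kubelet-cgroupv1-optin.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

	# minikube's own suggestion, quoted verbatim from the failure output above:
	minikube start --extra-config=kubelet.cgroup-driver=systemd

Alternatively, moving the runner to a cgroup v2 image would sidestep the validation entirely, per the KEP linked in the warning (https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1).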

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:57:01.638927  447211 retry.go:31] will retry after 4.028579856s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:57:15.669208  447211 retry.go:31] will retry after 5.506035508s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:57:31.176556  447211 retry.go:31] will retry after 10.117493923s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:57:51.294671  447211 retry.go:31] will retry after 9.220475709s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 17 more times]
I1202 21:58:10.515910  447211 retry.go:31] will retry after 15.967313429s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
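The retry line above reflects the harness's backoff loop: the tunnel test GETs http://10.101.145.220 with a per-request client timeout and, on failure, sleeps for a growing interval before trying again. A minimal Go sketch of that pattern, with illustrative helper names and durations (this is not minikube's actual retry.go):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollURL issues GETs against url until one succeeds or the overall
// deadline passes, doubling the wait between attempts.
func pollURL(url string, clientTimeout, overall time.Duration) error {
	client := &http.Client{Timeout: clientTimeout}
	deadline := time.Now().Add(overall)
	wait := time.Second
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // the service answered
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %s: %w", url, err)
		}
		// Matches the shape of the "will retry after ..." lines in this log.
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // back off, mirroring the growing intervals in the log
	}
}

func main() {
	if err := pollURL("http://10.101.145.220", 2*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

With the cluster's apiserver unreachable for most of this window, both this HTTP poll ("context deadline exceeded") and the pod-list checks ("connection refused") keep failing until their deadlines expire.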
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the warning above repeats 28 more times while helpers_test.go:337 polls for the pod ...]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (331.37678ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
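The failure above is a poll-and-timeout pattern: the helper lists pods in kube-system by label, logs a WARNING on each refused connection, and retries until the 4m0s budget expires, at which point client-go's rate limiter reports "context deadline exceeded". Below is a minimal Go sketch of that shape, assuming a client-go clientset, a 2-second interval, and the same label selector; it is illustrative only, not minikube's actual helper code:

	package pvcwait

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForStorageProvisioner retries a label-filtered pod list until a pod
	// appears or the 4-minute budget runs out -- the same shape that produced
	// the WARNING lines and the final "context deadline exceeded" above.
	func waitForStorageProvisioner(cs kubernetes.Interface) error {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		return wait.PollUntilContextCancel(ctx, 2*time.Second, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "integration-test=storage-provisioner",
				})
				if err != nil {
					// A refused connection is treated as transient: warn and retry.
					log.Printf("WARNING: pod list returned: %v", err)
					return false, nil
				}
				return len(pods.Items) > 0, nil
			})
	}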
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
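Two facts in the inspect dump matter for the post-mortem: the container itself is still "running" on 192.168.49.2, and 8441/tcp (the apiserver port) is published on 127.0.0.1:33151, so the outage is inside the container rather than at the Docker layer. A short sketch of pulling exactly those fields out of `docker inspect` output follows; the struct fields mirror the JSON above, while the function name and minimal error handling are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectSummary decodes `docker inspect` output and reports the container
	// state plus the host port backing 8441/tcp, ignoring the rest of the dump.
	func inspectSummary(container string) error {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return err
		}
		var info []struct {
			State           struct{ Status string }
			NetworkSettings struct {
				Ports map[string][]struct{ HostIp, HostPort string }
			}
		}
		if err := json.Unmarshal(out, &info); err != nil {
			return err
		}
		if len(info) == 0 {
			return fmt.Errorf("no such container: %s", container)
		}
		fmt.Println("state:", info[0].State.Status) // "running" in the dump above
		for _, b := range info[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("8441/tcp published on %s:%s\n", b.HostIp, b.HostPort)
		}
		return nil
	}

	func main() {
		if err := inspectSummary("functional-066896"); err != nil {
			fmt.Println(err)
		}
	}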
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (313.738157ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-066896 ssh findmnt -T /mount1                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh            │ functional-066896 ssh findmnt -T /mount2                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ ssh            │ functional-066896 ssh findmnt -T /mount3                                                                                                      │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ mount          │ -p functional-066896 --kill=true                                                                                                              │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ addons         │ functional-066896 addons list                                                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ addons         │ functional-066896 addons list -o json                                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ service        │ functional-066896 service list                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service        │ functional-066896 service list -o json                                                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service        │ functional-066896 service --namespace=default --https --url hello-node                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service        │ functional-066896 service hello-node --url --format={{.IP}}                                                                                   │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ service        │ functional-066896 service hello-node --url                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start          │ -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start          │ -p functional-066896 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ start          │ -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-066896 --alsologtostderr -v=1                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ image          │ functional-066896 image ls --format short --alsologtostderr                                                                                   │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ image          │ functional-066896 image ls --format yaml --alsologtostderr                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ ssh            │ functional-066896 ssh pgrep buildkitd                                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │                     │
	│ image          │ functional-066896 image build -t localhost/my-image:functional-066896 testdata/build --alsologtostderr                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ image          │ functional-066896 image ls                                                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ image          │ functional-066896 image ls --format json --alsologtostderr                                                                                    │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ image          │ functional-066896 image ls --format table --alsologtostderr                                                                                   │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ update-context │ functional-066896 update-context --alsologtostderr -v=2                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ update-context │ functional-066896 update-context --alsologtostderr -v=2                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	│ update-context │ functional-066896 update-context --alsologtostderr -v=2                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:58 UTC │ 02 Dec 25 21:58 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:58:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:58:46.901861  507730 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:58:46.902020  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902049  507730 out.go:374] Setting ErrFile to fd 2...
	I1202 21:58:46.902055  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902463  507730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:58:46.902886  507730 out.go:368] Setting JSON to false
	I1202 21:58:46.903818  507730 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13255,"bootTime":1764699472,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:58:46.903890  507730 start.go:143] virtualization:  
	I1202 21:58:46.907131  507730 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:58:46.910758  507730 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:58:46.910832  507730 notify.go:221] Checking for updates...
	I1202 21:58:46.916328  507730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:58:46.919207  507730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:58:46.922097  507730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:58:46.924927  507730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:58:46.927693  507730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:58:46.931080  507730 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:58:46.931712  507730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:58:46.967128  507730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:58:46.967244  507730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:47.036134  507730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:47.026846878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:47.036254  507730 docker.go:319] overlay module found
	I1202 21:58:47.039414  507730 out.go:179] * Using the docker driver based on existing profile
	I1202 21:58:47.042260  507730 start.go:309] selected driver: docker
	I1202 21:58:47.042282  507730 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:47.042390  507730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:58:47.045971  507730 out.go:203] 
	W1202 21:58:47.048833  507730 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:58:47.051708  507730 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.45283707Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=bf00db59-611c-44fb-b66b-5de338fe239d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486207629Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486338707Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486372447Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.31254149Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=02dfde09-63cb-48a9-bc75-2498ded8aebd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338777762Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338914322Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338952624Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364142306Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364305064Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364345213Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.448620533Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=2c172ded-5053-4702-8981-86fe65b3eb5a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473261763Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473491575Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473554164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502089674Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502268679Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502308638Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.270878698Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=6683c882-fed2-46df-a5c6-4c16ad59fbea name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300274442Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300423301Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300466198Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325738621Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325897843Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325952326Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:00:58.137037   26012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 22:00:58.137787   26012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 22:00:58.139602   26012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 22:00:58.140304   26012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 22:00:58.142055   26012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:00:58 up  3:43,  0 user,  load average: 0.30, 0.31, 0.35
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:00:55 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:00:56 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1315.
	Dec 02 22:00:56 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:56 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:56 functional-066896 kubelet[25888]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:56 functional-066896 kubelet[25888]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:56 functional-066896 kubelet[25888]: E1202 22:00:56.456123   25888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:00:56 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:00:56 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1316.
	Dec 02 22:00:57 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:57 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:57 functional-066896 kubelet[25907]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:57 functional-066896 kubelet[25907]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:57 functional-066896 kubelet[25907]: E1202 22:00:57.177373   25907 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1317.
	Dec 02 22:00:57 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:57 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:00:57 functional-066896 kubelet[25969]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:57 functional-066896 kubelet[25969]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:00:57 functional-066896 kubelet[25969]: E1202 22:00:57.973923   25969 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:00:57 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (316.310752ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)
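The kubelet section above holds the root cause for this whole block of failures: under v1.35.0-beta.0 the kubelet refuses to validate its configuration on a cgroup v1 host, systemd restarts it in a loop (counter past 1300), the apiserver on 8441 never comes back, and every kubectl-dependent check fails with "connection refused". The host runs kernel 5.15.0-1084-aws on Ubuntu 20.04, which boots with cgroup v1 unless the unified hierarchy is enabled on the kernel command line. A hedged sketch of the statfs-magic probe (the approach runc's libcontainer uses) for checking a host's cgroup mode; this is illustrative, not the kubelet's actual validation code:

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// onCgroupV2 reports whether /sys/fs/cgroup is a unified (cgroup v2) mount.
	// On a v1 host it is a tmpfs with per-controller subdirectories, so the
	// magic-number check fails -- the condition the kubelet above rejects.
	func onCgroupV2() (bool, error) {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			return false, err
		}
		return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
	}

	func main() {
		v2, err := onCgroupV2()
		fmt.Println("cgroup v2:", v2, err) // expected: false on this host
	}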

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-066896 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-066896 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (77.010242ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-066896 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
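For comparison, a guarded variant of the template (hypothetical; not what functional_test.go actually sends) would yield empty output instead of an execution error when the node list is empty, because text/template's `if` treats an empty slice as false:

```go
// Hypothetical guarded form: range over labels only when .items is non-empty,
// so an unreachable apiserver yields "" rather than an index error.
const guarded = `{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`
```

That would not make the assertion pass, since the labels are still missing, but it would distinguish "no nodes returned" from a template bug in the output above.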
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-066896
helpers_test.go:243: (dbg) docker inspect functional-066896:

-- stdout --
	[
	    {
	        "Id": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	        "Created": "2025-12-02T21:29:26.751342392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:29:26.806917516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/hosts",
	        "LogPath": "/var/lib/docker/containers/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4/861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4-json.log",
	        "Name": "/functional-066896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-066896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-066896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "861c5248e0ab55092ad202adb9c48c8199667bdfb9c24a7fdda5c6635e4fc6f4",
	                "LowerDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3249525af46317beb7010792443e19be324f069381ca352bed185d0921ea7695/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-066896",
	                "Source": "/var/lib/docker/volumes/functional-066896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-066896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-066896",
	                "name.minikube.sigs.k8s.io": "functional-066896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "404f76c1cdec8f6f0d913d0675a05a7e8a5b8348e0726b25ffe08e731c17d145",
	            "SandboxKey": "/var/run/docker/netns/404f76c1cdec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-066896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:a3:07:ce:c6:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bf8b8a6910a503fa231514930bbc4f76780e8ae9ac55d22aec9cb084fcdac2c",
	                    "EndpointID": "1b87d7ab37fe4d362d6b3878b660c9f6aecff4c98cb214a92a98dc9c4673f583",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-066896",
	                        "861c5248e0ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
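One detail worth pulling out of the inspect dump: the guest address 192.168.49.2:8441 that kubectl cannot reach is published on the host as 127.0.0.1:33151. A small sketch, using only the standard library, of reading that mapping from the JSON above (the `inspect` constant is hand-trimmed to the relevant fragment):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hand-trimmed fragment of the `docker inspect functional-066896` output above.
const inspect = `{"NetworkSettings":{"Ports":{"8441/tcp":[{"HostIp":"127.0.0.1","HostPort":"33151"}]}}}`

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var c container
	if err := json.Unmarshal([]byte(inspect), &c); err != nil {
		panic(err)
	}
	// Prints: apiserver published at 127.0.0.1:33151
	b := c.NetworkSettings.Ports["8441/tcp"][0]
	fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
}
```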
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-066896 -n functional-066896: exit status 2 (319.266969ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs -n 25: (1.053706112s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ functional-066896 kubectl -- --context functional-066896 get pods                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ start   │ -p functional-066896 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                  │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:44 UTC │                     │
	│ license │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ config  │ functional-066896 config unset cpus                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ config  │ functional-066896 config get cpus                                                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ config  │ functional-066896 config set cpus 2                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ config  │ functional-066896 config get cpus                                                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ config  │ functional-066896 config unset cpus                                                                                                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ config  │ functional-066896 config get cpus                                                                                                                         │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh sudo systemctl is-active docker                                                                                                     │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ tunnel  │ functional-066896 tunnel --alsologtostderr                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ tunnel  │ functional-066896 tunnel --alsologtostderr                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ ssh     │ functional-066896 ssh sudo systemctl is-active containerd                                                                                                 │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ tunnel  │ functional-066896 tunnel --alsologtostderr                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │                     │
	│ image   │ functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image ls                                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image ls                                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image ls                                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image save kicbase/echo-server:functional-066896 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image rm kicbase/echo-server:functional-066896 --alsologtostderr                                                                        │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image ls                                                                                                                                │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	│ image   │ functional-066896 image save --daemon kicbase/echo-server:functional-066896 --alsologtostderr                                                             │ functional-066896 │ jenkins │ v1.37.0 │ 02 Dec 25 21:56 UTC │ 02 Dec 25 21:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:44:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:44:17.650988  488914 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:44:17.651127  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651131  488914 out.go:374] Setting ErrFile to fd 2...
	I1202 21:44:17.651134  488914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:44:17.651388  488914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:44:17.651725  488914 out.go:368] Setting JSON to false
	I1202 21:44:17.652562  488914 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12386,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:44:17.652624  488914 start.go:143] virtualization:  
	I1202 21:44:17.655925  488914 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:44:17.658824  488914 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:44:17.658955  488914 notify.go:221] Checking for updates...
	I1202 21:44:17.664772  488914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:44:17.667672  488914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:44:17.670581  488914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:44:17.673492  488914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:44:17.676281  488914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:44:17.679520  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:17.679615  488914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:44:17.708368  488914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:44:17.708467  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.767956  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.759221256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.768046  488914 docker.go:319] overlay module found
	I1202 21:44:17.771104  488914 out.go:179] * Using the docker driver based on existing profile
	I1202 21:44:17.773889  488914 start.go:309] selected driver: docker
	I1202 21:44:17.773897  488914 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.773983  488914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:44:17.774077  488914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:44:17.834934  488914 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:44:17.825868601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:44:17.835402  488914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:44:17.835426  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:17.835482  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:17.835523  488914 start.go:353] cluster config:
	{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:17.838587  488914 out.go:179] * Starting "functional-066896" primary control-plane node in "functional-066896" cluster
	I1202 21:44:17.841458  488914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:44:17.844370  488914 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:44:17.847200  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:17.847277  488914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:44:17.866587  488914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:44:17.866598  488914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:44:17.909149  488914 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 21:44:18.073530  488914 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 21:44:18.073687  488914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/config.json ...
	I1202 21:44:18.073803  488914 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073909  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:44:18.073917  488914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.617µs
	I1202 21:44:18.073927  488914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:44:18.073937  488914 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:44:18.073939  488914 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073964  488914 start.go:360] acquireMachinesLock for functional-066896: {Name:mk267bbea9c27a359f02eb801882f7b85387ec92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.073980  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:44:18.073986  488914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.935µs
	I1202 21:44:18.073991  488914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074001  488914 start.go:364] duration metric: took 25.551µs to acquireMachinesLock for "functional-066896"
	I1202 21:44:18.074000  488914 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074014  488914 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:44:18.074021  488914 fix.go:54] fixHost starting: 
	I1202 21:44:18.074029  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:44:18.074034  488914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.037µs
	I1202 21:44:18.074039  488914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074056  488914 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074084  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:44:18.074089  488914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 41.329µs
	I1202 21:44:18.074093  488914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074101  488914 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074151  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:44:18.074156  488914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 55.623µs
	I1202 21:44:18.074160  488914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:44:18.074169  488914 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074193  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:44:18.074211  488914 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.457µs
	I1202 21:44:18.074217  488914 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:44:18.074232  488914 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074258  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:44:18.074262  488914 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.032µs
	I1202 21:44:18.074267  488914 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:44:18.074276  488914 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:44:18.074274  488914 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:44:18.074311  488914 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:44:18.074315  488914 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.174µs
	I1202 21:44:18.074320  488914 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:44:18.074327  488914 cache.go:87] Successfully saved all images to host disk.
	I1202 21:44:18.091506  488914 fix.go:112] recreateIfNeeded on functional-066896: state=Running err=<nil>
	W1202 21:44:18.091527  488914 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:44:18.096748  488914 out.go:252] * Updating the running docker "functional-066896" container ...
	I1202 21:44:18.096772  488914 machine.go:94] provisionDockerMachine start ...
	I1202 21:44:18.096874  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.114456  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.114786  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.114793  488914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:44:18.266794  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.266809  488914 ubuntu.go:182] provisioning hostname "functional-066896"
	I1202 21:44:18.266875  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.286274  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.286575  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.286589  488914 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-066896 && echo "functional-066896" | sudo tee /etc/hostname
	I1202 21:44:18.448160  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-066896
	
	I1202 21:44:18.448232  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:18.466449  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:18.466766  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:18.466781  488914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-066896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-066896/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-066896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:44:18.615365  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:44:18.615380  488914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 21:44:18.615404  488914 ubuntu.go:190] setting up certificates
	I1202 21:44:18.615412  488914 provision.go:84] configureAuth start
	I1202 21:44:18.615471  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:18.633069  488914 provision.go:143] copyHostCerts
	I1202 21:44:18.633141  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 21:44:18.633158  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 21:44:18.633234  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 21:44:18.633330  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 21:44:18.633334  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 21:44:18.633359  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 21:44:18.633406  488914 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 21:44:18.633410  488914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 21:44:18.633430  488914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 21:44:18.633475  488914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.functional-066896 san=[127.0.0.1 192.168.49.2 functional-066896 localhost minikube]
	I1202 21:44:19.174279  488914 provision.go:177] copyRemoteCerts
	I1202 21:44:19.174331  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:44:19.174370  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.190978  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.294889  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 21:44:19.312628  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:44:19.330566  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:44:19.347713  488914 provision.go:87] duration metric: took 732.278587ms to configureAuth
	I1202 21:44:19.347730  488914 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:44:19.347935  488914 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:44:19.348040  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.364877  488914 main.go:143] libmachine: Using SSH client type: native
	I1202 21:44:19.365168  488914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 21:44:19.365182  488914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 21:44:19.733535  488914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 21:44:19.733548  488914 machine.go:97] duration metric: took 1.636769982s to provisionDockerMachine
	I1202 21:44:19.733558  488914 start.go:293] postStartSetup for "functional-066896" (driver="docker")
	I1202 21:44:19.733570  488914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:44:19.733637  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:44:19.733700  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.752520  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:19.854929  488914 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:44:19.858053  488914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:44:19.858070  488914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:44:19.858080  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 21:44:19.858131  488914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 21:44:19.858206  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 21:44:19.858277  488914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts -> hosts in /etc/test/nested/copy/447211
	I1202 21:44:19.858317  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/447211
	I1202 21:44:19.865625  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:19.882511  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts --> /etc/test/nested/copy/447211/hosts (40 bytes)
	I1202 21:44:19.899291  488914 start.go:296] duration metric: took 165.718396ms for postStartSetup
	I1202 21:44:19.899374  488914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:44:19.899409  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:19.915689  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.016990  488914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:44:20.022912  488914 fix.go:56] duration metric: took 1.948885968s for fixHost
	I1202 21:44:20.022943  488914 start.go:83] releasing machines lock for "functional-066896", held for 1.948933476s
	I1202 21:44:20.023059  488914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-066896
	I1202 21:44:20.041984  488914 ssh_runner.go:195] Run: cat /version.json
	I1202 21:44:20.042007  488914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:44:20.042033  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.042071  488914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:44:20.064148  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.064737  488914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:44:20.168080  488914 ssh_runner.go:195] Run: systemctl --version
	I1202 21:44:20.290437  488914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 21:44:20.326220  488914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:44:20.331076  488914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:44:20.331137  488914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:44:20.338791  488914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:44:20.338805  488914 start.go:496] detecting cgroup driver to use...
	I1202 21:44:20.338835  488914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:44:20.338881  488914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 21:44:20.354128  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 21:44:20.367183  488914 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:44:20.367236  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:44:20.383031  488914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:44:20.396225  488914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:44:20.505938  488914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:44:20.631853  488914 docker.go:234] disabling docker service ...
	I1202 21:44:20.631909  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:44:20.647481  488914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:44:20.660948  488914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:44:20.779859  488914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:44:20.901936  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
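
Before CRI-O can be configured, the log above shows minikube stopping and masking the competing container runtimes. The same sequence by hand (flags and unit names exactly as recorded in the log):

    # Disable cri-dockerd and docker so CRI-O owns the node's containers
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
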
	I1202 21:44:20.922332  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:44:20.937696  488914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 21:44:20.937766  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.947525  488914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 21:44:20.947591  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.956868  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.966757  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.976111  488914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:44:20.984116  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:20.993108  488914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.003934  488914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 21:44:21.015041  488914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:44:21.023179  488914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:44:21.030977  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.150076  488914 ssh_runner.go:195] Run: sudo systemctl restart crio
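
Condensed, the CRI-O reconfiguration recorded above amounts to the following shell sketch (same sed expressions, paths, and pause image as in the log; a reproduction for readers, not minikube's actual provisioning code):

    # Point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and the cgroup driver in the drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Let pods bind privileged ports without extra capabilities
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # Enable IPv4 forwarding and restart the runtime
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
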
	I1202 21:44:21.327555  488914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 21:44:21.327622  488914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 21:44:21.331404  488914 start.go:564] Will wait 60s for crictl version
	I1202 21:44:21.331471  488914 ssh_runner.go:195] Run: which crictl
	I1202 21:44:21.335016  488914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:44:21.359060  488914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 21:44:21.359133  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.387110  488914 ssh_runner.go:195] Run: crio --version
	I1202 21:44:21.420984  488914 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 21:44:21.423772  488914 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:44:21.440341  488914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:44:21.447237  488914 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:44:21.449900  488914 kubeadm.go:884] updating cluster {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:44:21.450046  488914 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 21:44:21.450110  488914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:44:21.483620  488914 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 21:44:21.483631  488914 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:44:21.483637  488914 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 21:44:21.483726  488914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-066896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:44:21.483815  488914 ssh_runner.go:195] Run: crio config
	I1202 21:44:21.540157  488914 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:44:21.540183  488914 cni.go:84] Creating CNI manager for ""
	I1202 21:44:21.540190  488914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:44:21.540200  488914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:44:21.540251  488914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-066896 NodeName:functional-066896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:44:21.540412  488914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-066896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:44:21.540486  488914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:44:21.551296  488914 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:44:21.551378  488914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:44:21.559159  488914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 21:44:21.572470  488914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:44:21.586886  488914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
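
The multi-document config staged above as /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked offline before kubeadm consumes it. This is a hedged suggestion rather than something this log does, and it assumes the bundled kubeadm is new enough to ship the validate subcommand:

    # Validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
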
	I1202 21:44:21.600852  488914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:44:21.604702  488914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:44:21.760401  488914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:44:22.412975  488914 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896 for IP: 192.168.49.2
	I1202 21:44:22.412987  488914 certs.go:195] generating shared ca certs ...
	I1202 21:44:22.413002  488914 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:44:22.413155  488914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 21:44:22.413195  488914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 21:44:22.413201  488914 certs.go:257] generating profile certs ...
	I1202 21:44:22.413284  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.key
	I1202 21:44:22.413360  488914 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key.afad1c23
	I1202 21:44:22.413398  488914 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key
	I1202 21:44:22.413511  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 21:44:22.413543  488914 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 21:44:22.413552  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:44:22.413581  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 21:44:22.413604  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:44:22.413626  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 21:44:22.413674  488914 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 21:44:22.414299  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:44:22.434951  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:44:22.453111  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:44:22.472098  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 21:44:22.493256  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:44:22.511523  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:44:22.529485  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:44:22.547667  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 21:44:22.565085  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 21:44:22.583650  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:44:22.601678  488914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 21:44:22.619263  488914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:44:22.631918  488914 ssh_runner.go:195] Run: openssl version
	I1202 21:44:22.638008  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 21:44:22.646246  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.649963  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.650030  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 21:44:22.691947  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 21:44:22.699744  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 21:44:22.707750  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711346  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.711410  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 21:44:22.752553  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:44:22.760779  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:44:22.769102  488914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.772990  488914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.773054  488914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:44:22.817125  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
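
The hash-and-symlink rounds above follow OpenSSL's CA lookup convention: each trusted PEM under /etc/ssl/certs is found through a <subject-hash>.0 link. A minimal sketch of one round, using the minikubeCA paths from the log:

    # Expose the CA under /etc/ssl/certs, then add its subject-hash lookup link
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
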
	I1202 21:44:22.825521  488914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:44:22.829263  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:44:22.870268  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:44:22.912651  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:44:22.953793  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:44:22.994690  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:44:23.036128  488914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
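
The six openssl runs above are expiry probes: -checkend 86400 exits non-zero if the certificate lapses within the next 86400 seconds (24 h), which is what would trigger regeneration. The same check, looped over the paths from the log:

    # Flag any control-plane cert that expires within 24h
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "ok: ${c}" || echo "expiring within 24h: ${c}"
    done
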
	I1202 21:44:23.077233  488914 kubeadm.go:401] StartCluster: {Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:44:23.077311  488914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 21:44:23.077384  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.104728  488914 cri.go:89] found id: ""
	I1202 21:44:23.104787  488914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:44:23.112693  488914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:44:23.112702  488914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:44:23.112754  488914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:44:23.120199  488914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.120715  488914 kubeconfig.go:125] found "functional-066896" server: "https://192.168.49.2:8441"
	I1202 21:44:23.122004  488914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:44:23.129849  488914 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:29:46.719862797 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:44:21.596345133 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 21:44:23.129868  488914 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:44:23.129878  488914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 21:44:23.129934  488914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:44:23.164567  488914 cri.go:89] found id: ""
	I1202 21:44:23.164629  488914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:44:23.192730  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:44:23.201193  488914 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5576 Dec  2 21:33 /etc/kubernetes/scheduler.conf
	
	I1202 21:44:23.201254  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:44:23.209100  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:44:23.217145  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.217201  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:44:23.224901  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.232713  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.232773  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:44:23.240473  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:44:23.248046  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:44:23.248102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:44:23.255508  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:44:23.263587  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:23.311842  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.167347  488914 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.855478015s)
	I1202 21:44:25.167416  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.367575  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:44:25.433420  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
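
Consolidated, the restart path above re-runs five kubeadm init phases against the regenerated config, in this order (same binaries, phases, and flags as logged):

    B=/var/lib/minikube/binaries/v1.35.0-beta.0
    C=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$B:$PATH" kubeadm init phase certs all --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase kubeconfig all --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase kubelet-start --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase control-plane all --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase etcd local --config "$C"
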
	I1202 21:44:25.478422  488914 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:44:25.478494  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:25.978693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.479461  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:26.978647  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.479295  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:27.979313  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.479548  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:28.979300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.478679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:29.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.479305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:30.979214  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.478682  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:31.979440  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.478676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:32.978971  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.478687  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:33.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.479399  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:34.978686  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.479541  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:35.979365  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.478985  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:36.978766  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.478652  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:37.979222  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.478642  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:38.979289  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.479367  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:39.978641  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.478896  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:40.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.479195  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:41.979035  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.478597  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:42.978688  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.478820  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:43.979413  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.478702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:44.979325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.478716  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:45.979514  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.479502  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:46.978679  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.479602  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:47.978676  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:48.978691  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:49.979208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.479262  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:50.978947  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.478848  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:51.979340  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.478943  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:52.979631  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.479208  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:53.978824  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.478692  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:54.978621  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.479381  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:55.978718  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:56.979217  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.479300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:57.979309  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.478661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:58.978590  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.478589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:44:59.979149  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.479524  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:00.979613  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:01.979556  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.479181  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:02.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.479560  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:03.979258  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.478693  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:04.979625  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.479483  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:05.979403  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.479145  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:06.979083  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.478795  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:07.979236  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.478753  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:08.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.479607  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:09.979523  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.479438  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:10.978717  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.478907  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:11.979407  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.478991  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:12.979216  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.479168  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:13.979304  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.479589  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:14.979207  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.478756  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:15.979408  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.479237  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:16.979186  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.478671  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:17.979155  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.478781  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:18.978702  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.478767  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:19.978709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.478610  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:20.979395  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.479136  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:21.978666  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.479565  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:22.978675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.478723  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:23.979164  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.478675  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:24.978579  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
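
The block from 21:44:25 to 21:45:25 above is the apiserver wait loop: the same pgrep probe fired roughly every 500 ms until a process appears or the minute-long budget runs out. In shell terms (interval and span inferred from the timestamps; a sketch, not minikube's Go loop):

    # Poll for a running kube-apiserver process, ~500 ms apart, for up to 60 s
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done
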
	I1202 21:45:25.479540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:25.479652  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:25.504711  488914 cri.go:89] found id: ""
	I1202 21:45:25.504725  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.504732  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:25.504738  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:25.504795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:25.529752  488914 cri.go:89] found id: ""
	I1202 21:45:25.529766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.529773  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:25.529778  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:25.529838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:25.555068  488914 cri.go:89] found id: ""
	I1202 21:45:25.555082  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.555089  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:25.555095  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:25.555154  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:25.583996  488914 cri.go:89] found id: ""
	I1202 21:45:25.584010  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.584017  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:25.584023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:25.584083  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:25.613039  488914 cri.go:89] found id: ""
	I1202 21:45:25.613053  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.613060  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:25.613065  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:25.613125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:25.638912  488914 cri.go:89] found id: ""
	I1202 21:45:25.638926  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.638933  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:25.638938  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:25.639016  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:25.663753  488914 cri.go:89] found id: ""
	I1202 21:45:25.663766  488914 logs.go:282] 0 containers: []
	W1202 21:45:25.663773  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:25.663781  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:25.663793  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:25.693023  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:25.693040  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:25.759763  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:25.759782  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:25.774658  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:25.774679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:25.838644  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:25.830527   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.831235   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.832835   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.833412   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:25.835218   11586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:25.838656  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:25.838667  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
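
With no apiserver found, minikube falls back to gathering diagnostics; the same bundle can be pulled by hand with the commands it runs here:

    sudo crictl ps -a                                                          # container status
    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
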
	I1202 21:45:28.417551  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:28.428847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:28.428924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:28.461391  488914 cri.go:89] found id: ""
	I1202 21:45:28.461406  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.461413  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:28.461418  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:28.461487  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:28.493536  488914 cri.go:89] found id: ""
	I1202 21:45:28.493549  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.493556  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:28.493561  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:28.493625  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:28.521334  488914 cri.go:89] found id: ""
	I1202 21:45:28.521347  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.521354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:28.521360  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:28.521429  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:28.546459  488914 cri.go:89] found id: ""
	I1202 21:45:28.546472  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.546479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:28.546484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:28.546558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:28.573310  488914 cri.go:89] found id: ""
	I1202 21:45:28.573325  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.573332  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:28.573338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:28.573398  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:28.603231  488914 cri.go:89] found id: ""
	I1202 21:45:28.603245  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.603252  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:28.603259  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:28.603339  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:28.628995  488914 cri.go:89] found id: ""
	I1202 21:45:28.629009  488914 logs.go:282] 0 containers: []
	W1202 21:45:28.629016  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:28.629024  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:28.629034  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:28.694293  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:28.694315  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:28.709309  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:28.709326  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:28.772742  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:28.764634   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.765346   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.766846   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.767546   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:28.769217   11680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:28.772763  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:28.772775  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:28.851065  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:28.851099  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.383921  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:31.394465  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:31.394529  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:31.432030  488914 cri.go:89] found id: ""
	I1202 21:45:31.432046  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.432053  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:31.432061  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:31.432122  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:31.469314  488914 cri.go:89] found id: ""
	I1202 21:45:31.469327  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.469334  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:31.469339  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:31.469399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:31.495701  488914 cri.go:89] found id: ""
	I1202 21:45:31.495715  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.495721  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:31.495726  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:31.495783  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:31.525459  488914 cri.go:89] found id: ""
	I1202 21:45:31.525472  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.525479  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:31.525484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:31.525548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:31.551543  488914 cri.go:89] found id: ""
	I1202 21:45:31.551557  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.551564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:31.551569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:31.551635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:31.576459  488914 cri.go:89] found id: ""
	I1202 21:45:31.576473  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.576479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:31.576485  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:31.576543  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:31.605711  488914 cri.go:89] found id: ""
	I1202 21:45:31.605726  488914 logs.go:282] 0 containers: []
	W1202 21:45:31.605733  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:31.605741  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:31.605752  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:31.637077  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:31.637094  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:31.704571  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:31.704592  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:31.719615  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:31.719640  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:31.784987  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:31.776784   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.777502   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779172   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779783   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.781463   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:31.776784   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.777502   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779172   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.779783   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:31.781463   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:31.785007  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:31.785019  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.367127  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:34.377127  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:34.377203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:34.402736  488914 cri.go:89] found id: ""
	I1202 21:45:34.402750  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.402757  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:34.402769  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:34.402864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:34.443728  488914 cri.go:89] found id: ""
	I1202 21:45:34.443742  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.443749  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:34.443754  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:34.443815  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:34.479956  488914 cri.go:89] found id: ""
	I1202 21:45:34.479970  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.479985  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:34.479991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:34.480055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:34.508482  488914 cri.go:89] found id: ""
	I1202 21:45:34.508503  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.508510  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:34.508516  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:34.508573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:34.534801  488914 cri.go:89] found id: ""
	I1202 21:45:34.534814  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.534821  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:34.534826  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:34.534884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:34.559463  488914 cri.go:89] found id: ""
	I1202 21:45:34.559477  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.559484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:34.559490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:34.559551  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:34.584528  488914 cri.go:89] found id: ""
	I1202 21:45:34.584543  488914 logs.go:282] 0 containers: []
	W1202 21:45:34.584550  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:34.584557  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:34.584568  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:34.651241  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:34.651261  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:34.666228  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:34.666244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:34.728086  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:34.720557   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.720952   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.722671   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.723025   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.724562   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:34.720557   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.720952   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.722671   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.723025   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:34.724562   11893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:34.728108  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:34.728120  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:34.804348  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:34.804369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
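Every describe-nodes attempt above fails identically: kubectl's discovery client (memcache.go) gets "connection refused" dialing https://localhost:8441, which means nothing is listening on the apiserver port at all, rather than an apiserver that is up but unhealthy. Two quick checks from inside the node; a sketch assuming the usual ss and curl tooling is available (the port is taken from the log; /livez is the standard kube-apiserver health endpoint):

	# Confirm nothing is bound to 8441:
	sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
	# Once something is listening, probe apiserver health (-k: self-signed cert):
	curl -ksS https://localhost:8441/livez; echo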
	I1202 21:45:37.332022  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:37.341829  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:37.341888  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:37.366064  488914 cri.go:89] found id: ""
	I1202 21:45:37.366078  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.366085  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:37.366090  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:37.366147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:37.395570  488914 cri.go:89] found id: ""
	I1202 21:45:37.395584  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.395590  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:37.395595  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:37.395663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:37.429125  488914 cri.go:89] found id: ""
	I1202 21:45:37.429140  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.429147  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:37.429161  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:37.429218  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:37.462030  488914 cri.go:89] found id: ""
	I1202 21:45:37.462054  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.462062  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:37.462080  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:37.462152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:37.490229  488914 cri.go:89] found id: ""
	I1202 21:45:37.490242  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.490260  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:37.490266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:37.490349  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:37.515496  488914 cri.go:89] found id: ""
	I1202 21:45:37.515510  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.515516  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:37.515522  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:37.515578  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:37.544546  488914 cri.go:89] found id: ""
	I1202 21:45:37.544560  488914 logs.go:282] 0 containers: []
	W1202 21:45:37.544567  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:37.544575  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:37.544586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:37.617995  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:37.618023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:37.634282  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:37.634307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:37.704089  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:37.696265   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.697434   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.698656   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.699121   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.700652   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:37.696265   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.697434   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.698656   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.699121   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:37.700652   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:37.704099  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:37.704110  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:37.780382  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:37.780402  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.308261  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:40.318898  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:40.318954  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:40.351388  488914 cri.go:89] found id: ""
	I1202 21:45:40.351403  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.351409  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:40.351415  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:40.351476  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:40.376844  488914 cri.go:89] found id: ""
	I1202 21:45:40.376857  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.376864  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:40.376869  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:40.376927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:40.400732  488914 cri.go:89] found id: ""
	I1202 21:45:40.400745  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.400752  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:40.400757  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:40.400816  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:40.446048  488914 cri.go:89] found id: ""
	I1202 21:45:40.446061  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.446067  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:40.446075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:40.446134  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:40.475997  488914 cri.go:89] found id: ""
	I1202 21:45:40.476011  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.476018  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:40.476023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:40.476081  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:40.501615  488914 cri.go:89] found id: ""
	I1202 21:45:40.501629  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.501636  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:40.501642  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:40.501705  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:40.526763  488914 cri.go:89] found id: ""
	I1202 21:45:40.526809  488914 logs.go:282] 0 containers: []
	W1202 21:45:40.526816  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:40.526831  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:40.526842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:40.542072  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:40.542088  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:40.603416  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:40.594977   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.595712   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.597533   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.598122   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.599848   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:40.594977   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.595712   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.597533   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.598122   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:40.599848   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:40.603427  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:40.603437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:40.683775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:40.683797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:40.710561  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:40.710577  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
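The container-status gather above is deliberately runtime-agnostic: it resolves crictl with which and falls back to docker ps -a if crictl is absent or fails, so the same gather works on CRI-O, containerd, and Docker nodes. An equivalent $(...) form of the backtick one-liner in the log:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a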
	I1202 21:45:43.275783  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:43.286075  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:43.286135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:43.312011  488914 cri.go:89] found id: ""
	I1202 21:45:43.312026  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.312033  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:43.312039  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:43.312099  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:43.337316  488914 cri.go:89] found id: ""
	I1202 21:45:43.337330  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.337337  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:43.337359  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:43.337418  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:43.369627  488914 cri.go:89] found id: ""
	I1202 21:45:43.369641  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.369648  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:43.369653  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:43.369714  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:43.395672  488914 cri.go:89] found id: ""
	I1202 21:45:43.395686  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.395693  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:43.395698  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:43.395757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:43.436721  488914 cri.go:89] found id: ""
	I1202 21:45:43.436735  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.436742  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:43.436747  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:43.436808  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:43.468979  488914 cri.go:89] found id: ""
	I1202 21:45:43.468993  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.469008  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:43.469014  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:43.469084  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:43.500825  488914 cri.go:89] found id: ""
	I1202 21:45:43.500839  488914 logs.go:282] 0 containers: []
	W1202 21:45:43.500846  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:43.500854  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:43.500864  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:43.537110  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:43.537127  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:43.604154  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:43.604172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:43.619529  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:43.619546  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:43.684232  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:43.676801   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.677191   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.678735   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.679232   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.680785   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:43.676801   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.677191   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.678735   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.679232   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:43.680785   12219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:43.684242  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:43.684253  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.262533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:46.273030  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:46.273094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:46.298023  488914 cri.go:89] found id: ""
	I1202 21:45:46.298039  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.298045  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:46.298051  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:46.298109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:46.327737  488914 cri.go:89] found id: ""
	I1202 21:45:46.327752  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.327760  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:46.327769  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:46.327834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:46.353980  488914 cri.go:89] found id: ""
	I1202 21:45:46.353994  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.354003  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:46.354008  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:46.354073  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:46.380386  488914 cri.go:89] found id: ""
	I1202 21:45:46.380400  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.380406  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:46.380412  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:46.380480  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:46.406595  488914 cri.go:89] found id: ""
	I1202 21:45:46.406609  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.406616  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:46.406621  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:46.406679  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:46.441216  488914 cri.go:89] found id: ""
	I1202 21:45:46.441230  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.441237  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:46.441242  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:46.441305  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:46.473258  488914 cri.go:89] found id: ""
	I1202 21:45:46.473272  488914 logs.go:282] 0 containers: []
	W1202 21:45:46.473279  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:46.473287  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:46.473298  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:46.490441  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:46.490458  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:46.554481  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:46.546212   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.546743   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548456   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548932   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.550452   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:46.546212   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.546743   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548456   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.548932   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:46.550452   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:46.554490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:46.554501  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:46.631777  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:46.631800  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:46.660339  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:46.660355  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
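Each retry ends by re-running the same describe-nodes command against the version-pinned kubectl bundled under /var/lib/minikube; one attempt can be reproduced by hand with the paths copied verbatim from the log:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	echo $?   # exits 1 with "connection refused" while the apiserver is down

Note also that the order of the "Gathering logs for ..." lines shifts from retry to retry (kubelet first in one pass, dmesg or container status first in another); that pattern is consistent with the gatherer ranging over a Go map, whose iteration order is randomized per run, though this is an inference from the log rather than something confirmed from the minikube source.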
	I1202 21:45:49.231885  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:49.243758  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:49.243823  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:49.268714  488914 cri.go:89] found id: ""
	I1202 21:45:49.268728  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.268735  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:49.268741  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:49.268799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:49.293827  488914 cri.go:89] found id: ""
	I1202 21:45:49.293842  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.293849  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:49.293854  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:49.293919  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:49.319633  488914 cri.go:89] found id: ""
	I1202 21:45:49.319647  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.319654  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:49.319661  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:49.319720  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:49.350167  488914 cri.go:89] found id: ""
	I1202 21:45:49.350181  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.350188  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:49.350193  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:49.350252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:49.375814  488914 cri.go:89] found id: ""
	I1202 21:45:49.375828  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.375835  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:49.375841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:49.375905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:49.400638  488914 cri.go:89] found id: ""
	I1202 21:45:49.400657  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.400664  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:49.400670  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:49.400727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:49.453654  488914 cri.go:89] found id: ""
	I1202 21:45:49.453668  488914 logs.go:282] 0 containers: []
	W1202 21:45:49.453680  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:49.453689  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:49.453699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:49.479146  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:49.479161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:49.548448  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:49.540286   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.541087   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.542829   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.543435   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:49.545034   12416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:49.548457  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:49.548468  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:49.628739  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:49.628759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:49.658161  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:49.658177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:52.223612  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:52.234793  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:52.234899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:52.265577  488914 cri.go:89] found id: ""
	I1202 21:45:52.265591  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.265598  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:52.265603  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:52.265663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:52.292373  488914 cri.go:89] found id: ""
	I1202 21:45:52.292387  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.292394  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:52.292399  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:52.292466  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:52.317157  488914 cri.go:89] found id: ""
	I1202 21:45:52.317171  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.317178  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:52.317183  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:52.317240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:52.347843  488914 cri.go:89] found id: ""
	I1202 21:45:52.347856  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.347863  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:52.347868  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:52.347927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:52.372874  488914 cri.go:89] found id: ""
	I1202 21:45:52.372889  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.372895  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:52.372900  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:52.372962  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:52.398247  488914 cri.go:89] found id: ""
	I1202 21:45:52.398260  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.398267  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:52.398273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:52.398330  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:52.445693  488914 cri.go:89] found id: ""
	I1202 21:45:52.445706  488914 logs.go:282] 0 containers: []
	W1202 21:45:52.445713  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:52.445721  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:52.445732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:52.465150  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:52.465167  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:52.540766  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:52.532627   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.533261   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.534855   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.535434   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:52.537057   12521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:52.540776  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:52.540797  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:52.618862  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:52.618882  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:52.648548  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:52.648565  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
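The dmesg gather keeps only warning-and-worse kernel messages and trims to the last 400 lines. The flags are util-linux dmesg options, used verbatim above: -P suppresses the pager that -H (human-readable output) would otherwise invoke, -L=never disables color in the captured text, and --level selects which severities to keep:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400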
	I1202 21:45:55.221074  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:55.231158  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:55.231215  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:55.256269  488914 cri.go:89] found id: ""
	I1202 21:45:55.256282  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.256289  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:55.256294  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:55.256371  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:55.281345  488914 cri.go:89] found id: ""
	I1202 21:45:55.281360  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.281367  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:55.281372  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:55.281430  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:55.306779  488914 cri.go:89] found id: ""
	I1202 21:45:55.306793  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.306799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:55.306805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:55.306865  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:55.333304  488914 cri.go:89] found id: ""
	I1202 21:45:55.333318  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.333325  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:55.333333  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:55.333393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:55.358550  488914 cri.go:89] found id: ""
	I1202 21:45:55.358563  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.358570  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:55.358575  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:55.358638  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:55.387929  488914 cri.go:89] found id: ""
	I1202 21:45:55.387943  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.387951  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:55.387957  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:55.388020  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:55.426649  488914 cri.go:89] found id: ""
	I1202 21:45:55.426663  488914 logs.go:282] 0 containers: []
	W1202 21:45:55.426670  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:55.426678  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:55.426687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:55.519746  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:55.519772  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:55.554225  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:55.554241  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:55.622464  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:55.622484  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:45:55.638187  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:55.638213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:55.703154  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:55.694645   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.695247   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.697193   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.698046   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:55.699714   12643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
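	(The block above is one complete pass of the harness's control-plane probe: for each expected component it runs pgrep and crictl and finds no container, then falls back to gathering logs. A minimal shell sketch of that probe, using the same component names and crictl flags as the run above, assuming only that crictl is installed; the loop itself is a reconstruction, not part of the report:)

	  # Probe each control-plane component the way the log above does;
	  # an empty crictl result corresponds to the "No container was found" lines.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "no container found matching \"$name\""
	    else
	      echo "$name: $ids"
	    fi
	  done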
	I1202 21:45:58.203385  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:45:58.213686  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:45:58.213750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:45:58.239330  488914 cri.go:89] found id: ""
	I1202 21:45:58.239344  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.239351  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:45:58.239356  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:45:58.239416  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:45:58.264371  488914 cri.go:89] found id: ""
	I1202 21:45:58.264385  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.264392  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:45:58.264397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:45:58.264454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:45:58.289420  488914 cri.go:89] found id: ""
	I1202 21:45:58.289434  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.289441  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:45:58.289446  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:45:58.289504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:45:58.317750  488914 cri.go:89] found id: ""
	I1202 21:45:58.317764  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.317772  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:45:58.317777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:45:58.317834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:45:58.341672  488914 cri.go:89] found id: ""
	I1202 21:45:58.341687  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.341694  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:45:58.341699  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:45:58.341764  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:45:58.366074  488914 cri.go:89] found id: ""
	I1202 21:45:58.366088  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.366094  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:45:58.366099  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:45:58.366160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:45:58.390704  488914 cri.go:89] found id: ""
	I1202 21:45:58.390718  488914 logs.go:282] 0 containers: []
	W1202 21:45:58.390724  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:45:58.390741  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:45:58.390751  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:45:58.474575  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:45:58.455174   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467202   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.467877   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469512   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:45:58.469779   12726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:45:58.474586  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:45:58.474598  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:45:58.558574  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:45:58.558604  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:45:58.589663  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:45:58.589680  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:45:58.656150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:45:58.656169  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
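	(After each empty probe, the pass gathers the same four log sources seen above. Collected into one script for reference, with commands copied verbatim from the run; the grouping into a single script is a sketch, not something the harness does:)

	  # Gather the same diagnostics as one pass of the retry loop above.
	  sudo journalctl -u crio -n 400
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig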
	I1202 21:46:01.173977  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:01.186201  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:01.186270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:01.213408  488914 cri.go:89] found id: ""
	I1202 21:46:01.213424  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.213430  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:01.213436  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:01.213502  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:01.239993  488914 cri.go:89] found id: ""
	I1202 21:46:01.240007  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.240014  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:01.240019  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:01.240079  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:01.266106  488914 cri.go:89] found id: ""
	I1202 21:46:01.266120  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.266127  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:01.266132  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:01.266194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:01.292600  488914 cri.go:89] found id: ""
	I1202 21:46:01.292614  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.292621  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:01.292627  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:01.292689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:01.318438  488914 cri.go:89] found id: ""
	I1202 21:46:01.318453  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.318460  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:01.318466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:01.318530  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:01.344830  488914 cri.go:89] found id: ""
	I1202 21:46:01.344843  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.344850  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:01.344856  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:01.344914  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:01.370509  488914 cri.go:89] found id: ""
	I1202 21:46:01.370523  488914 logs.go:282] 0 containers: []
	W1202 21:46:01.370534  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:01.370541  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:01.370551  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:01.400108  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:01.400123  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:01.484583  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:01.484603  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:01.501311  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:01.501329  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:01.571182  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:01.562348   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.563495   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565118   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.565616   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:01.567293   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:01.571193  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:01.571204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.148935  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:04.159286  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:04.159346  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:04.191266  488914 cri.go:89] found id: ""
	I1202 21:46:04.191279  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.191286  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:04.191291  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:04.191350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:04.217195  488914 cri.go:89] found id: ""
	I1202 21:46:04.217209  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.217216  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:04.217221  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:04.217285  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:04.243674  488914 cri.go:89] found id: ""
	I1202 21:46:04.243689  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.243696  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:04.243701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:04.243760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:04.269892  488914 cri.go:89] found id: ""
	I1202 21:46:04.269905  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.269921  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:04.269927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:04.269998  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:04.296688  488914 cri.go:89] found id: ""
	I1202 21:46:04.296703  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.296711  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:04.296717  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:04.296785  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:04.322967  488914 cri.go:89] found id: ""
	I1202 21:46:04.322981  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.323017  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:04.323023  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:04.323091  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:04.348936  488914 cri.go:89] found id: ""
	I1202 21:46:04.348956  488914 logs.go:282] 0 containers: []
	W1202 21:46:04.348963  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:04.348972  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:04.348981  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:04.415190  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:04.415209  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:04.431456  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:04.431472  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:04.504661  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:04.496947   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.497391   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.498575   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.499350   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:04.500904   12945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:04.504671  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:04.504682  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:04.581468  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:04.581487  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:07.110404  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:07.120667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:07.120727  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:07.145924  488914 cri.go:89] found id: ""
	I1202 21:46:07.145938  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.145945  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:07.145950  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:07.146010  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:07.171187  488914 cri.go:89] found id: ""
	I1202 21:46:07.171200  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.171207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:07.171212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:07.171270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:07.197187  488914 cri.go:89] found id: ""
	I1202 21:46:07.197201  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.197208  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:07.197213  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:07.197272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:07.222713  488914 cri.go:89] found id: ""
	I1202 21:46:07.222728  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.222735  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:07.222740  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:07.222800  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:07.249213  488914 cri.go:89] found id: ""
	I1202 21:46:07.249226  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.249233  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:07.249239  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:07.249301  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:07.275464  488914 cri.go:89] found id: ""
	I1202 21:46:07.275478  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.275484  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:07.275490  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:07.275546  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:07.305137  488914 cri.go:89] found id: ""
	I1202 21:46:07.305151  488914 logs.go:282] 0 containers: []
	W1202 21:46:07.305166  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:07.305174  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:07.305187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:07.370440  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:07.370459  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:07.386336  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:07.386354  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:07.458373  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:07.450145   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.451013   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452690   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.452988   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:07.454469   13047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:07.458383  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:07.458395  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:07.542802  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:07.542822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:10.076833  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:10.087724  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:10.087819  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:10.114700  488914 cri.go:89] found id: ""
	I1202 21:46:10.114714  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.114722  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:10.114728  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:10.114794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:10.140632  488914 cri.go:89] found id: ""
	I1202 21:46:10.140646  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.140652  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:10.140658  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:10.140715  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:10.169820  488914 cri.go:89] found id: ""
	I1202 21:46:10.169834  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.169841  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:10.169850  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:10.169911  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:10.195172  488914 cri.go:89] found id: ""
	I1202 21:46:10.195186  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.195193  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:10.195199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:10.195262  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:10.229303  488914 cri.go:89] found id: ""
	I1202 21:46:10.229317  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.229324  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:10.229330  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:10.229392  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:10.257081  488914 cri.go:89] found id: ""
	I1202 21:46:10.257096  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.257102  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:10.257108  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:10.257168  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:10.283246  488914 cri.go:89] found id: ""
	I1202 21:46:10.283259  488914 logs.go:282] 0 containers: []
	W1202 21:46:10.283267  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:10.283274  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:10.283284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:10.351168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:10.351187  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:10.366368  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:10.366385  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:10.438623  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:10.429081   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431348   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.431791   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433355   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:10.433924   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:10.438633  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:10.438646  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:10.516775  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:10.516796  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:13.045661  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:13.056197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:13.056259  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:13.087662  488914 cri.go:89] found id: ""
	I1202 21:46:13.087675  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.087682  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:13.087688  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:13.087748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:13.113347  488914 cri.go:89] found id: ""
	I1202 21:46:13.113361  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.113368  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:13.113373  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:13.113432  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:13.139083  488914 cri.go:89] found id: ""
	I1202 21:46:13.139098  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.139105  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:13.139110  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:13.139181  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:13.165107  488914 cri.go:89] found id: ""
	I1202 21:46:13.165121  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.165128  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:13.165133  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:13.165196  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:13.190075  488914 cri.go:89] found id: ""
	I1202 21:46:13.190090  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.190107  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:13.190113  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:13.190180  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:13.219255  488914 cri.go:89] found id: ""
	I1202 21:46:13.219269  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.219276  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:13.219281  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:13.219342  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:13.245328  488914 cri.go:89] found id: ""
	I1202 21:46:13.245342  488914 logs.go:282] 0 containers: []
	W1202 21:46:13.245350  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:13.245358  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:13.245369  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:13.310150  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:13.310168  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:13.325530  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:13.325550  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:13.389916  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:13.382188   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.382836   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384508   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384993   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.386473   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:13.382188   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.382836   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384508   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.384993   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:13.386473   13262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:13.389926  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:13.389938  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:13.474064  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:13.474083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
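	(Every kubectl attempt in these passes fails identically: nothing answers on localhost:8441, the apiserver port this profile uses. A hedged, out-of-band check for that symptom, run on the node, could look like the following; ss and curl are assumed available and neither command appears in the original run:)

	  # Is anything listening on the apiserver port from the errors above?
	  sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	  # If something is listening, does the apiserver answer its health endpoint?
	  curl -k --max-time 5 https://localhost:8441/livez || true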
	I1202 21:46:16.007285  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:16.018077  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:16.018147  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:16.048444  488914 cri.go:89] found id: ""
	I1202 21:46:16.048458  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.048465  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:16.048477  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:16.048539  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:16.075066  488914 cri.go:89] found id: ""
	I1202 21:46:16.075079  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.075085  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:16.075090  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:16.075152  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:16.100648  488914 cri.go:89] found id: ""
	I1202 21:46:16.100662  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.100669  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:16.100674  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:16.100732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:16.131449  488914 cri.go:89] found id: ""
	I1202 21:46:16.131463  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.131470  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:16.131475  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:16.131534  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:16.158249  488914 cri.go:89] found id: ""
	I1202 21:46:16.158263  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.158270  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:16.158276  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:16.158340  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:16.183613  488914 cri.go:89] found id: ""
	I1202 21:46:16.183627  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.183633  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:16.183641  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:16.183702  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:16.209461  488914 cri.go:89] found id: ""
	I1202 21:46:16.209475  488914 logs.go:282] 0 containers: []
	W1202 21:46:16.209483  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:16.209490  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:16.209500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:16.275500  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:16.275520  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:16.291181  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:16.291196  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:16.361346  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:16.353221   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.354005   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355626   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355946   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.357477   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:16.353221   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.354005   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355626   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.355946   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:16.357477   13369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:16.361356  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:16.361368  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:16.437676  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:16.437697  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:18.967950  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:18.977983  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:18.978057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:19.007682  488914 cri.go:89] found id: ""
	I1202 21:46:19.007706  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.007714  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:19.007720  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:19.007794  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:19.033939  488914 cri.go:89] found id: ""
	I1202 21:46:19.033961  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.033969  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:19.033975  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:19.034042  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:19.059516  488914 cri.go:89] found id: ""
	I1202 21:46:19.059531  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.059544  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:19.059550  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:19.059616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:19.086051  488914 cri.go:89] found id: ""
	I1202 21:46:19.086065  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.086072  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:19.086078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:19.086135  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:19.110886  488914 cri.go:89] found id: ""
	I1202 21:46:19.110899  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.110906  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:19.110911  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:19.110969  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:19.137589  488914 cri.go:89] found id: ""
	I1202 21:46:19.137603  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.137610  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:19.137615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:19.137673  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:19.162755  488914 cri.go:89] found id: ""
	I1202 21:46:19.162769  488914 logs.go:282] 0 containers: []
	W1202 21:46:19.162776  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:19.162784  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:19.162794  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:19.189873  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:19.189888  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:19.255357  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:19.255375  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:19.270844  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:19.270861  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:19.340061  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:19.331455   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.332143   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.333672   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.334108   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:19.335622   13482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:19.340072  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:19.340089  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
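This poll cycle repeats at roughly three-second intervals for the rest of this log. Below is a minimal Go sketch of the pattern those timestamps imply, assuming a six-minute overall deadline; only the crictl invocation and the component names are taken from the log, everything else is illustrative and is not minikube's actual logs.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors the logged call:
//   sudo crictl ps -a --quiet --name=<name>
// and returns the matching container IDs (always empty on the failing node above).
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		for _, c := range components {
			fmt.Printf("found %d container(s) for %q\n", len(containerIDs(c)), c)
		}
		if len(containerIDs("kube-apiserver")) > 0 {
			return // the control plane is finally coming up
		}
		time.Sleep(3 * time.Second) // interval observed between cycles in the log
	}
	fmt.Println("timed out: no kube-apiserver container ever appeared")
}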
	I1202 21:46:21.925504  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:21.935839  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:21.935899  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:21.960350  488914 cri.go:89] found id: ""
	I1202 21:46:21.960363  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.960370  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:21.960375  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:21.960434  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:21.986080  488914 cri.go:89] found id: ""
	I1202 21:46:21.986097  488914 logs.go:282] 0 containers: []
	W1202 21:46:21.986105  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:21.986112  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:21.986174  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:22.014687  488914 cri.go:89] found id: ""
	I1202 21:46:22.014702  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.014709  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:22.014715  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:22.014778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:22.042230  488914 cri.go:89] found id: ""
	I1202 21:46:22.042245  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.042252  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:22.042257  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:22.042320  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:22.072112  488914 cri.go:89] found id: ""
	I1202 21:46:22.072126  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.072134  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:22.072139  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:22.072210  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:22.098531  488914 cri.go:89] found id: ""
	I1202 21:46:22.098555  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.098562  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:22.098568  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:22.098649  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:22.124074  488914 cri.go:89] found id: ""
	I1202 21:46:22.124088  488914 logs.go:282] 0 containers: []
	W1202 21:46:22.124095  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:22.124102  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:22.124112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:22.190291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:22.190311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:22.205264  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:22.205283  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:22.273286  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:22.264766   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.265364   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.266885   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.267553   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:22.269194   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:22.273308  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:22.273321  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:22.349070  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:22.349090  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:24.882662  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:24.893199  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:24.893260  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:24.918892  488914 cri.go:89] found id: ""
	I1202 21:46:24.918906  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.918913  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:24.918918  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:24.918977  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:24.944030  488914 cri.go:89] found id: ""
	I1202 21:46:24.944043  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.944050  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:24.944055  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:24.944115  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:24.969743  488914 cri.go:89] found id: ""
	I1202 21:46:24.969758  488914 logs.go:282] 0 containers: []
	W1202 21:46:24.969765  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:24.969770  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:24.969827  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:25.003432  488914 cri.go:89] found id: ""
	I1202 21:46:25.003449  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.003459  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:25.003466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:25.003573  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:25.030965  488914 cri.go:89] found id: ""
	I1202 21:46:25.030979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.030985  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:25.030991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:25.031072  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:25.057965  488914 cri.go:89] found id: ""
	I1202 21:46:25.057979  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.057986  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:25.057991  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:25.058048  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:25.085099  488914 cri.go:89] found id: ""
	I1202 21:46:25.085113  488914 logs.go:282] 0 containers: []
	W1202 21:46:25.085129  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:25.085137  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:25.085147  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:25.115538  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:25.115553  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:25.181412  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:25.181432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:25.196691  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:25.196712  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:25.261474  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:25.253377   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.253981   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.255584   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.256147   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:25.257741   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:25.261490  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:25.261500  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
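Every describe-nodes attempt above and below fails identically with dial tcp [::1]:8441: connect: connection refused. The hypothetical Go probe below reproduces just that check and distinguishes a closed apiserver port from a TLS or auth failure; only port 8441 comes from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 8441 is the apiserver endpoint kubectl is dialing in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is listening at all,
		// i.e. the apiserver process is down, not merely unhealthy.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}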
	I1202 21:46:27.838685  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:27.849142  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:27.849203  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:27.874519  488914 cri.go:89] found id: ""
	I1202 21:46:27.874533  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.874539  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:27.874545  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:27.874603  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:27.900185  488914 cri.go:89] found id: ""
	I1202 21:46:27.900198  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.900207  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:27.900212  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:27.900270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:27.926179  488914 cri.go:89] found id: ""
	I1202 21:46:27.926202  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.926209  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:27.926215  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:27.926280  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:27.951950  488914 cri.go:89] found id: ""
	I1202 21:46:27.951964  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.951971  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:27.951977  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:27.952034  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:27.976779  488914 cri.go:89] found id: ""
	I1202 21:46:27.976793  488914 logs.go:282] 0 containers: []
	W1202 21:46:27.976799  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:27.976804  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:27.976864  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:28.013447  488914 cri.go:89] found id: ""
	I1202 21:46:28.013462  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.013479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:28.013495  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:28.013562  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:28.041485  488914 cri.go:89] found id: ""
	I1202 21:46:28.041508  488914 logs.go:282] 0 containers: []
	W1202 21:46:28.041516  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:28.041524  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:28.041536  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:28.057180  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:28.057197  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:28.121537  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:28.113244   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.113943   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.115648   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.116208   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:28.117879   13783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:28.121548  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:28.121559  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:28.197190  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:28.197210  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:28.229525  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:28.229541  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:30.795826  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:30.806266  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:30.806329  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:30.834208  488914 cri.go:89] found id: ""
	I1202 21:46:30.834222  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.834229  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:30.834234  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:30.834293  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:30.859664  488914 cri.go:89] found id: ""
	I1202 21:46:30.859678  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.859685  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:30.859690  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:30.859748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:30.889034  488914 cri.go:89] found id: ""
	I1202 21:46:30.889048  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.889055  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:30.889061  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:30.889117  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:30.914676  488914 cri.go:89] found id: ""
	I1202 21:46:30.914689  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.914696  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:30.914701  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:30.914759  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:30.939761  488914 cri.go:89] found id: ""
	I1202 21:46:30.939774  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.939782  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:30.939787  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:30.939843  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:30.965463  488914 cri.go:89] found id: ""
	I1202 21:46:30.965476  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.965483  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:30.965488  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:30.965545  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:30.990187  488914 cri.go:89] found id: ""
	I1202 21:46:30.990200  488914 logs.go:282] 0 containers: []
	W1202 21:46:30.990206  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:30.990224  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:30.990236  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:31.005797  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:31.005813  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:31.069684  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:31.062028   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.062610   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064158   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.064666   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:31.066156   13886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:31.069694  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:31.069707  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:31.145787  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:31.145809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:31.178743  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:31.178759  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:33.744496  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:33.754580  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:33.754651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:33.779528  488914 cri.go:89] found id: ""
	I1202 21:46:33.779541  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.779548  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:33.779554  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:33.779616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:33.804198  488914 cri.go:89] found id: ""
	I1202 21:46:33.804212  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.804219  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:33.804227  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:33.804289  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:33.829645  488914 cri.go:89] found id: ""
	I1202 21:46:33.829659  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.829666  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:33.829675  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:33.829734  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:33.858338  488914 cri.go:89] found id: ""
	I1202 21:46:33.858352  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.858368  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:33.858375  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:33.858433  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:33.884555  488914 cri.go:89] found id: ""
	I1202 21:46:33.884570  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.884578  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:33.884583  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:33.884651  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:33.912967  488914 cri.go:89] found id: ""
	I1202 21:46:33.912981  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.912988  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:33.912994  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:33.913055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:33.938088  488914 cri.go:89] found id: ""
	I1202 21:46:33.938102  488914 logs.go:282] 0 containers: []
	W1202 21:46:33.938110  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:33.938118  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:33.938133  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:34.003604  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:34.003631  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:34.022128  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:34.022146  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:34.092004  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:34.083929   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.084375   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086257   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.086725   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:34.088064   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:34.092015  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:34.092029  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:34.169499  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:34.169519  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:36.700051  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:36.711435  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:36.711497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:36.738690  488914 cri.go:89] found id: ""
	I1202 21:46:36.738704  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.738711  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:36.738717  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:36.738776  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:36.765789  488914 cri.go:89] found id: ""
	I1202 21:46:36.765802  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.765810  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:36.765815  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:36.765880  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:36.790056  488914 cri.go:89] found id: ""
	I1202 21:46:36.790070  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.790077  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:36.790082  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:36.790138  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:36.818201  488914 cri.go:89] found id: ""
	I1202 21:46:36.818214  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.818221  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:36.818227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:36.818288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:36.845623  488914 cri.go:89] found id: ""
	I1202 21:46:36.845637  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.845644  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:36.845650  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:36.845710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:36.871336  488914 cri.go:89] found id: ""
	I1202 21:46:36.871350  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.871357  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:36.871362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:36.871427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:36.897589  488914 cri.go:89] found id: ""
	I1202 21:46:36.897605  488914 logs.go:282] 0 containers: []
	W1202 21:46:36.897611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:36.897619  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:36.897630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:36.913198  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:36.913213  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:36.973711  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:36.965706   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.966427   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.967404   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.968855   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:36.969298   14095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:36.973721  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:36.973732  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:37.054868  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:37.054889  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:37.083961  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:37.083976  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.651305  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:39.662125  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:39.662189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:39.693251  488914 cri.go:89] found id: ""
	I1202 21:46:39.693264  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.693271  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:39.693277  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:39.693333  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:39.720953  488914 cri.go:89] found id: ""
	I1202 21:46:39.720969  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.720976  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:39.720981  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:39.721039  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:39.747423  488914 cri.go:89] found id: ""
	I1202 21:46:39.747436  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.747443  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:39.747448  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:39.747512  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:39.773314  488914 cri.go:89] found id: ""
	I1202 21:46:39.773328  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.773335  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:39.773340  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:39.773396  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:39.801946  488914 cri.go:89] found id: ""
	I1202 21:46:39.801960  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.801966  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:39.801971  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:39.802027  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:39.831169  488914 cri.go:89] found id: ""
	I1202 21:46:39.831182  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.831189  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:39.831195  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:39.831255  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:39.855958  488914 cri.go:89] found id: ""
	I1202 21:46:39.855972  488914 logs.go:282] 0 containers: []
	W1202 21:46:39.855979  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:39.855987  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:39.855997  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:39.921041  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:39.921076  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:39.936417  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:39.936433  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:40.005449  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:39.993742   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.994635   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996381   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.996674   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:39.998192   14201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:40.005465  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:40.005479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:40.099731  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:40.099754  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
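Each "Gathering logs for X ..." line pairs with exactly one shell command executed on the node. Below is a self-contained sketch of that fan-out, reusing the command strings recorded in this log but substituting plain local exec for minikube's ssh_runner (an assumption made for illustration).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the "Gathering logs for ..." steps above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", s.name, out)
	}
}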
	I1202 21:46:42.632158  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:42.642592  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:42.642655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:42.680753  488914 cri.go:89] found id: ""
	I1202 21:46:42.680767  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.680774  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:42.680780  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:42.680845  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:42.727033  488914 cri.go:89] found id: ""
	I1202 21:46:42.727047  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.727056  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:42.727062  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:42.727125  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:42.753808  488914 cri.go:89] found id: ""
	I1202 21:46:42.753822  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.753829  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:42.753848  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:42.753906  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:42.782178  488914 cri.go:89] found id: ""
	I1202 21:46:42.782192  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.782200  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:42.782206  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:42.782272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:42.807839  488914 cri.go:89] found id: ""
	I1202 21:46:42.807853  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.807860  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:42.807867  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:42.807927  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:42.834250  488914 cri.go:89] found id: ""
	I1202 21:46:42.834276  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.834283  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:42.834290  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:42.834355  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:42.861699  488914 cri.go:89] found id: ""
	I1202 21:46:42.861721  488914 logs.go:282] 0 containers: []
	W1202 21:46:42.861728  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:42.861736  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:42.861747  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:42.937587  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:42.937608  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:42.969352  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:42.969374  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:43.035113  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:43.035138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:43.050909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:43.050924  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:43.116601  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:43.107713   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.108431   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.110316   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.111086   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:43.112866   14318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:46:45.616905  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:45.627026  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:45.627089  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:45.653296  488914 cri.go:89] found id: ""
	I1202 21:46:45.653311  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.653318  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:45.653323  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:45.653389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:45.685320  488914 cri.go:89] found id: ""
	I1202 21:46:45.685334  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.685342  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:45.685347  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:45.685407  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:45.714439  488914 cri.go:89] found id: ""
	I1202 21:46:45.714453  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.714460  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:45.714466  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:45.714524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:45.741650  488914 cri.go:89] found id: ""
	I1202 21:46:45.741665  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.741672  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:45.741678  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:45.741748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:45.768339  488914 cri.go:89] found id: ""
	I1202 21:46:45.768374  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.768381  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:45.768387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:45.768446  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:45.793382  488914 cri.go:89] found id: ""
	I1202 21:46:45.793396  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.793404  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:45.793410  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:45.793470  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:45.821520  488914 cri.go:89] found id: ""
	I1202 21:46:45.821534  488914 logs.go:282] 0 containers: []
	W1202 21:46:45.821541  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:45.821549  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:45.821560  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:45.836636  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:45.836657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:45.903141  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:45.894421   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.895256   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897082   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897803   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.899654   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:45.894421   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.895256   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897082   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.897803   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:45.899654   14407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:45.903152  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:45.903182  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:45.983151  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:45.983172  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:46.016509  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:46.016525  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
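
[Editor's note] Each poll above walks the control-plane components one by one and asks CRI-O for matching containers; `found id: ""` means none exist yet. A condensed sketch of that probe, using the same `crictl` invocation and component names that appear in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
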
	I1202 21:46:48.589533  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:48.600004  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:48.600063  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:48.624724  488914 cri.go:89] found id: ""
	I1202 21:46:48.624738  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.624745  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:48.624751  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:48.624809  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:48.649307  488914 cri.go:89] found id: ""
	I1202 21:46:48.649322  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.649329  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:48.649335  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:48.649393  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:48.689464  488914 cri.go:89] found id: ""
	I1202 21:46:48.689477  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.689484  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:48.689489  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:48.689548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:48.718180  488914 cri.go:89] found id: ""
	I1202 21:46:48.718195  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.718202  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:48.718207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:48.718274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:48.748759  488914 cri.go:89] found id: ""
	I1202 21:46:48.748773  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.748781  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:48.748786  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:48.748847  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:48.773610  488914 cri.go:89] found id: ""
	I1202 21:46:48.773624  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.773631  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:48.773637  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:48.773694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:48.798539  488914 cri.go:89] found id: ""
	I1202 21:46:48.798553  488914 logs.go:282] 0 containers: []
	W1202 21:46:48.798560  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:48.798568  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:48.798580  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:48.813434  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:48.813450  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:48.873005  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:48.865979   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.866496   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.867575   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.868055   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.869544   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:48.865979   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.866496   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.867575   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.868055   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:48.869544   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:48.873016  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:48.873027  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:48.949124  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:48.949143  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:48.981243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:48.981259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
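
[Editor's note] With no control-plane containers found, minikube falls back to host-level log sources. The commands below are taken verbatim from the Run: lines above, collected here so the same diagnostics can be reproduced by hand on the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
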
	I1202 21:46:51.549061  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:51.558950  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:51.559026  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:51.583587  488914 cri.go:89] found id: ""
	I1202 21:46:51.583601  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.583608  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:51.583614  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:51.583674  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:51.609150  488914 cri.go:89] found id: ""
	I1202 21:46:51.609163  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.609170  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:51.609175  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:51.609237  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:51.634897  488914 cri.go:89] found id: ""
	I1202 21:46:51.634910  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.634917  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:51.634922  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:51.634980  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:51.665746  488914 cri.go:89] found id: ""
	I1202 21:46:51.665760  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.665766  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:51.665772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:51.665830  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:51.704219  488914 cri.go:89] found id: ""
	I1202 21:46:51.704233  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.704240  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:51.704246  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:51.704310  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:51.736171  488914 cri.go:89] found id: ""
	I1202 21:46:51.736194  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.736202  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:51.736207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:51.736274  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:51.765446  488914 cri.go:89] found id: ""
	I1202 21:46:51.765469  488914 logs.go:282] 0 containers: []
	W1202 21:46:51.765476  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:51.765484  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:51.765494  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:51.792551  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:51.792566  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:51.857688  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:51.857706  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:51.873199  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:51.873214  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:51.942299  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:51.934624   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.935273   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.936792   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.937322   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.938323   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:51.934624   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.935273   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.936792   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.937322   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:51.938323   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:51.942311  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:51.942323  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
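
[Editor's note] The timestamps show the whole probe repeating roughly every three seconds, keyed on the `pgrep` check for a running apiserver process. A minimal sketch of that wait loop; the ~3s interval is read off the log, while the 240s deadline here is an assumption, not a value from this report:

    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
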
	I1202 21:46:54.519031  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:54.529427  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:54.529497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:54.558708  488914 cri.go:89] found id: ""
	I1202 21:46:54.558722  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.558729  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:54.558735  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:54.558796  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:54.583135  488914 cri.go:89] found id: ""
	I1202 21:46:54.583148  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.583155  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:54.583160  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:54.583221  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:54.609361  488914 cri.go:89] found id: ""
	I1202 21:46:54.609382  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.609390  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:54.609396  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:54.609461  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:54.637663  488914 cri.go:89] found id: ""
	I1202 21:46:54.637677  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.637683  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:54.637691  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:54.637748  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:54.666901  488914 cri.go:89] found id: ""
	I1202 21:46:54.666915  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.666922  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:54.666927  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:54.666987  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:54.695329  488914 cri.go:89] found id: ""
	I1202 21:46:54.695343  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.695350  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:54.695355  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:54.695413  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:54.724947  488914 cri.go:89] found id: ""
	I1202 21:46:54.724961  488914 logs.go:282] 0 containers: []
	W1202 21:46:54.724967  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:54.724975  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:54.724986  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:46:54.742963  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:54.742980  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:54.810513  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:54.803073   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.803954   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805454   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805860   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.806992   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:54.803073   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.803954   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805454   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.805860   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:54.806992   14720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:54.810523  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:54.810534  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:54.883552  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:54.883571  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:54.911389  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:54.911406  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
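
[Editor's note] CRI-O itself is up (its journal is readable above), yet no control-plane containers ever appear, which points at the kubelet, the component responsible for launching the apiserver static pod. A hypothetical next step, not taken from this log; the manifest path assumes the standard kubeadm layout:

    systemctl is-active kubelet
    sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'error|fail|apiserver' | tail -n 20
    ls -l /etc/kubernetes/manifests/   # static pod manifests, if the standard path is used
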
	I1202 21:46:57.481762  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:46:57.492870  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:46:57.492930  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:46:57.517199  488914 cri.go:89] found id: ""
	I1202 21:46:57.517213  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.517220  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:46:57.517225  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:46:57.517292  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:46:57.543039  488914 cri.go:89] found id: ""
	I1202 21:46:57.543053  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.543060  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:46:57.543066  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:46:57.543130  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:46:57.567509  488914 cri.go:89] found id: ""
	I1202 21:46:57.567524  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.567530  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:46:57.567536  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:46:57.567597  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:46:57.593052  488914 cri.go:89] found id: ""
	I1202 21:46:57.593074  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.593081  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:46:57.593087  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:46:57.593151  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:46:57.618537  488914 cri.go:89] found id: ""
	I1202 21:46:57.618551  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.618558  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:46:57.618563  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:46:57.618626  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:46:57.645917  488914 cri.go:89] found id: ""
	I1202 21:46:57.645931  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.645938  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:46:57.645943  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:46:57.646003  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:46:57.673325  488914 cri.go:89] found id: ""
	I1202 21:46:57.673338  488914 logs.go:282] 0 containers: []
	W1202 21:46:57.673353  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:46:57.673362  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:46:57.673378  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:46:57.748284  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:46:57.740291   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.740917   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.742583   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.743218   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.744902   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:46:57.740291   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.740917   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.742583   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.743218   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:46:57.744902   14823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:46:57.748294  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:46:57.748305  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:46:57.828296  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:46:57.828314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:46:57.855830  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:46:57.855846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:46:57.921121  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:46:57.921140  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:00.436836  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:00.448366  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:00.448436  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:00.478939  488914 cri.go:89] found id: ""
	I1202 21:47:00.478953  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.478960  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:00.478969  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:00.479059  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:00.505959  488914 cri.go:89] found id: ""
	I1202 21:47:00.505974  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.505981  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:00.505986  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:00.506050  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:00.532568  488914 cri.go:89] found id: ""
	I1202 21:47:00.532584  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.532597  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:00.532602  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:00.532667  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:00.561666  488914 cri.go:89] found id: ""
	I1202 21:47:00.561680  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.561687  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:00.561692  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:00.561753  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:00.588051  488914 cri.go:89] found id: ""
	I1202 21:47:00.588065  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.588072  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:00.588078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:00.588139  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:00.612422  488914 cri.go:89] found id: ""
	I1202 21:47:00.612437  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.612443  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:00.612449  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:00.612513  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:00.642069  488914 cri.go:89] found id: ""
	I1202 21:47:00.642082  488914 logs.go:282] 0 containers: []
	W1202 21:47:00.642089  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:00.642097  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:00.642108  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:00.727511  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:00.716696   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.717383   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.721543   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.722286   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.724054   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:00.716696   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.717383   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.721543   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.722286   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:00.724054   14923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:00.727520  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:00.727531  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:00.803650  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:00.803671  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:00.832608  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:00.832624  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:00.900692  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:00.900713  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.417333  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:03.427135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:03.427205  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:03.451551  488914 cri.go:89] found id: ""
	I1202 21:47:03.451566  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.451573  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:03.451578  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:03.451635  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:03.476736  488914 cri.go:89] found id: ""
	I1202 21:47:03.476750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.476757  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:03.476763  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:03.476825  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:03.501736  488914 cri.go:89] found id: ""
	I1202 21:47:03.501750  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.501756  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:03.501761  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:03.501820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:03.527339  488914 cri.go:89] found id: ""
	I1202 21:47:03.527353  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.527360  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:03.527365  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:03.527427  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:03.552910  488914 cri.go:89] found id: ""
	I1202 21:47:03.552923  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.552930  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:03.552936  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:03.552994  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:03.578110  488914 cri.go:89] found id: ""
	I1202 21:47:03.578124  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.578130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:03.578135  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:03.578194  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:03.603194  488914 cri.go:89] found id: ""
	I1202 21:47:03.603208  488914 logs.go:282] 0 containers: []
	W1202 21:47:03.603215  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:03.603223  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:03.603233  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:03.688154  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:03.688174  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:03.725392  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:03.725408  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:03.791852  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:03.791873  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:03.807065  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:03.807080  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:03.882666  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:03.872630   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.873205   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875257   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.875918   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:03.877748   15058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:06.384350  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:06.394676  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:06.394749  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:06.423508  488914 cri.go:89] found id: ""
	I1202 21:47:06.423523  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.423530  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:06.423536  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:06.423595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:06.449675  488914 cri.go:89] found id: ""
	I1202 21:47:06.449689  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.449696  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:06.449701  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:06.449762  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:06.480053  488914 cri.go:89] found id: ""
	I1202 21:47:06.480066  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.480073  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:06.480078  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:06.480140  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:06.508415  488914 cri.go:89] found id: ""
	I1202 21:47:06.508428  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.508435  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:06.508440  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:06.508498  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:06.533743  488914 cri.go:89] found id: ""
	I1202 21:47:06.533756  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.533763  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:06.533776  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:06.533836  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:06.558457  488914 cri.go:89] found id: ""
	I1202 21:47:06.558472  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.558479  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:06.558484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:06.558548  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:06.585312  488914 cri.go:89] found id: ""
	I1202 21:47:06.585326  488914 logs.go:282] 0 containers: []
	W1202 21:47:06.585333  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:06.585341  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:06.585352  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:06.600648  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:06.600665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:06.677036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:06.666806   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668050   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.668918   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.670752   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:06.671466   15142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:06.677046  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:06.677058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:06.757223  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:06.757244  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:06.785439  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:06.785455  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.357941  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:09.369144  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:09.369207  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:09.398056  488914 cri.go:89] found id: ""
	I1202 21:47:09.398070  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.398077  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:09.398083  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:09.398143  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:09.424606  488914 cri.go:89] found id: ""
	I1202 21:47:09.424620  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.424628  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:09.424633  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:09.424694  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:09.451520  488914 cri.go:89] found id: ""
	I1202 21:47:09.451535  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.451542  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:09.451547  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:09.451607  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:09.477315  488914 cri.go:89] found id: ""
	I1202 21:47:09.477330  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.477337  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:09.477344  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:09.477399  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:09.503654  488914 cri.go:89] found id: ""
	I1202 21:47:09.503668  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.503675  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:09.503680  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:09.503750  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:09.529545  488914 cri.go:89] found id: ""
	I1202 21:47:09.529558  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.529565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:09.529571  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:09.529629  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:09.554726  488914 cri.go:89] found id: ""
	I1202 21:47:09.554740  488914 logs.go:282] 0 containers: []
	W1202 21:47:09.554747  488914 logs.go:284] No container was found matching "kindnet"
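
Each round above asks the runtime the same question once per control-plane component: `sudo crictl ps -a --quiet --name=<component>`. With --quiet, crictl prints only container IDs, one per line, so empty output is exactly what produces the `found id: ""` and `0 containers: []` lines. A hedged sketch of that loop (the component list is read off the cri.go lines above; the wrapper is not minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the command the log shows minikube running
    // over SSH. Empty output means no container matched the name filter.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Println(c, "->", ids)
        }
    }
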
	I1202 21:47:09.554754  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:09.554767  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:09.620273  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:09.620293  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:09.635655  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:09.635672  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:09.720524  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:09.711753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.712492   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.714140   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715224   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:09.715753   15247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
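
The describe-nodes gather runs the kubectl binary minikube installed inside the node (`/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl`) against the node-local kubeconfig, and the report keeps the failure twice: once inline in the error line and once as the captured `** stderr **` block. A sketch of that capture pattern, with the assumptions noted in the comments (paths are the ones visible in the log; separating stdout from stderr is an assumption about the wrapper):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With the apiserver down this is "exit status 1"; stderr holds
            // the connection-refused lines printed under ** stderr ** above.
            fmt.Println("failed describe nodes:", err)
            fmt.Print(stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
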
	I1202 21:47:09.720534  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:09.720544  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:09.800379  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:09.800400  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:12.331221  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:12.341899  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:12.341957  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:12.369642  488914 cri.go:89] found id: ""
	I1202 21:47:12.369656  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.369663  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:12.369668  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:12.369729  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:12.395917  488914 cri.go:89] found id: ""
	I1202 21:47:12.395930  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.395938  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:12.395943  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:12.396015  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:12.422817  488914 cri.go:89] found id: ""
	I1202 21:47:12.422831  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.422838  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:12.422843  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:12.422903  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:12.451973  488914 cri.go:89] found id: ""
	I1202 21:47:12.451986  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.451993  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:12.451998  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:12.452057  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:12.477543  488914 cri.go:89] found id: ""
	I1202 21:47:12.477557  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.477564  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:12.477569  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:12.477627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:12.504941  488914 cri.go:89] found id: ""
	I1202 21:47:12.504954  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.504961  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:12.504967  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:12.505025  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:12.530800  488914 cri.go:89] found id: ""
	I1202 21:47:12.530821  488914 logs.go:282] 0 containers: []
	W1202 21:47:12.530828  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:12.530836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:12.530846  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:12.596910  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:12.596929  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:12.612316  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:12.612333  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:12.684014  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:12.674817   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.675729   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.677493   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.678254   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:12.680040   15351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:12.684025  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:12.684039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:12.771749  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:12.771771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:15.304325  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:15.315385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:15.315451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:15.341411  488914 cri.go:89] found id: ""
	I1202 21:47:15.341427  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.341434  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:15.341439  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:15.341501  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:15.366798  488914 cri.go:89] found id: ""
	I1202 21:47:15.366811  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.366818  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:15.366824  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:15.366884  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:15.391138  488914 cri.go:89] found id: ""
	I1202 21:47:15.391152  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.391159  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:15.391164  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:15.391226  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:15.415514  488914 cri.go:89] found id: ""
	I1202 21:47:15.415528  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.415535  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:15.415540  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:15.415595  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:15.440750  488914 cri.go:89] found id: ""
	I1202 21:47:15.440764  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.440771  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:15.440777  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:15.440839  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:15.469806  488914 cri.go:89] found id: ""
	I1202 21:47:15.469820  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.469827  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:15.469833  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:15.469891  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:15.497648  488914 cri.go:89] found id: ""
	I1202 21:47:15.497661  488914 logs.go:282] 0 containers: []
	W1202 21:47:15.497668  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:15.497675  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:15.497687  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:15.567654  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:15.567679  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:15.582770  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:15.582785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:15.647132  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:15.638484   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.639308   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641247   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.641864   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:15.643617   15459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:15.647143  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:15.647154  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:15.740463  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:15.740492  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.270232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:18.280720  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:18.280782  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:18.305710  488914 cri.go:89] found id: ""
	I1202 21:47:18.305724  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.305731  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:18.305736  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:18.305793  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:18.329526  488914 cri.go:89] found id: ""
	I1202 21:47:18.329539  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.329545  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:18.329550  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:18.329606  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:18.355166  488914 cri.go:89] found id: ""
	I1202 21:47:18.355195  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.355202  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:18.355207  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:18.355275  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:18.381992  488914 cri.go:89] found id: ""
	I1202 21:47:18.382006  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.382013  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:18.382018  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:18.382080  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:18.410268  488914 cri.go:89] found id: ""
	I1202 21:47:18.410283  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.410290  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:18.410296  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:18.410354  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:18.434607  488914 cri.go:89] found id: ""
	I1202 21:47:18.434620  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.434627  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:18.434632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:18.434689  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:18.460092  488914 cri.go:89] found id: ""
	I1202 21:47:18.460106  488914 logs.go:282] 0 containers: []
	W1202 21:47:18.460112  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:18.460120  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:18.460130  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:18.525571  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:18.517461   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.518031   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.519652   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.520213   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:18.521831   15556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:18.525580  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:18.525591  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:18.601752  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:18.601776  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:18.631242  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:18.631258  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:18.706458  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:18.706478  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.222232  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:21.232120  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:21.232178  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:21.257057  488914 cri.go:89] found id: ""
	I1202 21:47:21.257071  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.257078  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:21.257089  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:21.257145  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:21.281739  488914 cri.go:89] found id: ""
	I1202 21:47:21.281752  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.281759  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:21.281764  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:21.281820  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:21.306878  488914 cri.go:89] found id: ""
	I1202 21:47:21.306892  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.306899  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:21.306905  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:21.306959  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:21.332327  488914 cri.go:89] found id: ""
	I1202 21:47:21.332340  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.332347  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:21.332352  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:21.332408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:21.356717  488914 cri.go:89] found id: ""
	I1202 21:47:21.356730  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.356737  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:21.356742  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:21.356799  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:21.380787  488914 cri.go:89] found id: ""
	I1202 21:47:21.380801  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.380807  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:21.380813  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:21.380867  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:21.405984  488914 cri.go:89] found id: ""
	I1202 21:47:21.405998  488914 logs.go:282] 0 containers: []
	W1202 21:47:21.406005  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:21.406013  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:21.406023  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:21.438420  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:21.438435  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:21.503149  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:21.503170  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:21.518755  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:21.518771  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:21.584415  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:21.575466   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.576263   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.577599   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.578775   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:21.579539   15676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:21.584425  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:21.584437  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.161915  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:24.172338  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:24.172401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:24.197081  488914 cri.go:89] found id: ""
	I1202 21:47:24.197095  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.197102  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:24.197108  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:24.197166  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:24.222792  488914 cri.go:89] found id: ""
	I1202 21:47:24.222806  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.222827  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:24.222833  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:24.222898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:24.248463  488914 cri.go:89] found id: ""
	I1202 21:47:24.248486  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.248495  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:24.248500  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:24.248561  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:24.282539  488914 cri.go:89] found id: ""
	I1202 21:47:24.282554  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.282561  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:24.282567  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:24.282636  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:24.308071  488914 cri.go:89] found id: ""
	I1202 21:47:24.308086  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.308093  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:24.308098  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:24.308165  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:24.333666  488914 cri.go:89] found id: ""
	I1202 21:47:24.333689  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.333696  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:24.333702  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:24.333769  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:24.363212  488914 cri.go:89] found id: ""
	I1202 21:47:24.363226  488914 logs.go:282] 0 containers: []
	W1202 21:47:24.363233  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:24.363254  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:24.363265  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:24.428642  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:24.428664  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:24.444347  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:24.444363  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:24.510036  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:24.501704   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.502115   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.503735   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.504102   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:24.505628   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:24.510047  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:24.510058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:24.585705  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:24.585726  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:27.116827  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:27.127233  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:27.127299  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:27.156311  488914 cri.go:89] found id: ""
	I1202 21:47:27.156325  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.156332  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:27.156337  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:27.156401  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:27.180597  488914 cri.go:89] found id: ""
	I1202 21:47:27.180611  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.180617  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:27.180623  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:27.180682  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:27.205333  488914 cri.go:89] found id: ""
	I1202 21:47:27.205347  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.205354  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:27.205359  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:27.205417  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:27.231165  488914 cri.go:89] found id: ""
	I1202 21:47:27.231179  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.231186  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:27.231192  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:27.231251  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:27.260640  488914 cri.go:89] found id: ""
	I1202 21:47:27.260654  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.260662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:27.260667  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:27.260732  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:27.286552  488914 cri.go:89] found id: ""
	I1202 21:47:27.286566  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.286573  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:27.286578  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:27.286637  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:27.311590  488914 cri.go:89] found id: ""
	I1202 21:47:27.311604  488914 logs.go:282] 0 containers: []
	W1202 21:47:27.311611  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:27.311619  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:27.311630  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:27.376291  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:27.376311  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:27.391299  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:27.391314  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:27.452046  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:27.444398   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.445076   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.446669   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.447208   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:27.448668   15873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:47:27.452056  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:27.452067  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:27.527099  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:27.527119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.055495  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:30.067197  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:30.067272  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:30.093385  488914 cri.go:89] found id: ""
	I1202 21:47:30.093400  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.093407  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:30.093413  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:30.093475  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:30.120468  488914 cri.go:89] found id: ""
	I1202 21:47:30.120482  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.120490  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:30.120495  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:30.120558  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:30.147744  488914 cri.go:89] found id: ""
	I1202 21:47:30.147759  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.147767  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:30.147772  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:30.147838  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:30.173628  488914 cri.go:89] found id: ""
	I1202 21:47:30.173650  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.173658  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:30.173664  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:30.173742  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:30.201952  488914 cri.go:89] found id: ""
	I1202 21:47:30.201992  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.202001  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:30.202007  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:30.202075  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:30.228366  488914 cri.go:89] found id: ""
	I1202 21:47:30.228380  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.228387  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:30.228399  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:30.228468  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:30.254412  488914 cri.go:89] found id: ""
	I1202 21:47:30.254426  488914 logs.go:282] 0 containers: []
	W1202 21:47:30.254434  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:30.254442  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:30.254453  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:30.330454  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:30.330474  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:30.364243  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:30.364259  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:30.429823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:30.429841  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:30.445036  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:30.445058  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:30.506029  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:47:30.498290   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.499032   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500527   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.500960   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:30.502484   15990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
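
By this point the same cycle has run at 21:47:06, :09, :12, :15, :18, :21, :24, :27, and :30: a fixed poll of roughly three seconds, waiting for the apiserver to come back, with the only per-round variation being the order in which log sources are gathered. A generic sketch of such a wait loop (the interval and deadline are inferred from the timestamp cadence, not read from minikube's source):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Assumed overall budget; the log alone does not show the real one.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver answered; stop polling")
                return
            }
            time.Sleep(3 * time.Second) // matches the :06, :09, :12, ... cadence
        }
        fmt.Println("deadline passed; apiserver never came back")
    }
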
	I1202 21:47:33.006821  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:33.017853  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:33.017924  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:33.043314  488914 cri.go:89] found id: ""
	I1202 21:47:33.043328  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.043335  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:33.043343  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:33.043402  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:33.068806  488914 cri.go:89] found id: ""
	I1202 21:47:33.068820  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.068826  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:33.068831  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:33.068889  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:33.097822  488914 cri.go:89] found id: ""
	I1202 21:47:33.097835  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.097842  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:33.097847  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:33.097905  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:33.123154  488914 cri.go:89] found id: ""
	I1202 21:47:33.123168  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.123176  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:33.123181  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:33.123240  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:33.148284  488914 cri.go:89] found id: ""
	I1202 21:47:33.148298  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.148305  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:33.148310  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:33.148369  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:33.173434  488914 cri.go:89] found id: ""
	I1202 21:47:33.173448  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.173454  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:33.173460  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:33.173519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:33.198619  488914 cri.go:89] found id: ""
	I1202 21:47:33.198633  488914 logs.go:282] 0 containers: []
	W1202 21:47:33.198640  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:33.198647  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:33.198662  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:33.263426  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:33.263446  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:33.279026  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:33.279042  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:33.339351  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:33.331868   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.332345   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334080   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.334388   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:33.335856   16082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:33.339361  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:33.339372  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:33.418569  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:33.418588  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
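The cycle above runs one probe per control-plane component: "sudo crictl ps -a --quiet --name=<component>" is executed over SSH, and empty output is recorded as no matching container. Below is a minimal Go sketch of that probe; the names (listContainers, components) and the local exec.Command call are illustrative assumptions, not minikube's actual implementation, which routes the command through its ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the logged command
// "sudo crictl ps -a --quiet --name=<name>": --quiet prints one
// container ID per line, so empty output means no match.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}

Run against the node in this state, every component would come back empty, matching the warnings in the log.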
	I1202 21:47:35.951124  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:35.962387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:35.962491  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:35.989088  488914 cri.go:89] found id: ""
	I1202 21:47:35.989102  488914 logs.go:282] 0 containers: []
	W1202 21:47:35.989109  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:35.989115  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:35.989176  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:36.017461  488914 cri.go:89] found id: ""
	I1202 21:47:36.017477  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.017484  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:36.017490  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:36.017614  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:36.046790  488914 cri.go:89] found id: ""
	I1202 21:47:36.046805  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.046812  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:36.046817  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:36.046875  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:36.073683  488914 cri.go:89] found id: ""
	I1202 21:47:36.073697  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.073704  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:36.073710  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:36.073767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:36.101900  488914 cri.go:89] found id: ""
	I1202 21:47:36.101914  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.101921  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:36.101926  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:36.101985  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:36.130435  488914 cri.go:89] found id: ""
	I1202 21:47:36.130449  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.130456  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:36.130462  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:36.130524  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:36.157134  488914 cri.go:89] found id: ""
	I1202 21:47:36.157148  488914 logs.go:282] 0 containers: []
	W1202 21:47:36.157155  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:36.157163  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:36.157173  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:36.221900  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:36.221919  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:36.237051  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:36.237068  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:36.299876  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:36.291935   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.292632   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294289   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.294810   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:36.296452   16189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:36.299886  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:36.299910  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:36.374213  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:36.374232  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
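Every "describe nodes" attempt fails the same way: kubectl dials https://localhost:8441 and the TCP connect is refused, meaning nothing is listening on the apiserver port at all. A tiny sketch of that reachability check, assuming the same host and port as the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver running, this fails with
	// "dial tcp [::1]:8441: connect: connection refused",
	// the same error kubectl reports above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

A refused connection (rather than a timeout) distinguishes "nothing bound to the port" from a firewall or routing problem, which is why the repeated kubectl errors point at the apiserver container never starting rather than at connectivity.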
	I1202 21:47:38.902545  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:38.913357  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:38.913415  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:38.944543  488914 cri.go:89] found id: ""
	I1202 21:47:38.944557  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.944563  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:38.944569  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:38.944627  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:38.975916  488914 cri.go:89] found id: ""
	I1202 21:47:38.975930  488914 logs.go:282] 0 containers: []
	W1202 21:47:38.975937  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:38.975942  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:38.976001  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:39.009795  488914 cri.go:89] found id: ""
	I1202 21:47:39.009810  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.009817  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:39.009823  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:39.009886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:39.034688  488914 cri.go:89] found id: ""
	I1202 21:47:39.034718  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.034726  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:39.034732  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:39.034805  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:39.059667  488914 cri.go:89] found id: ""
	I1202 21:47:39.059693  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.059701  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:39.059706  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:39.059767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:39.085837  488914 cri.go:89] found id: ""
	I1202 21:47:39.085851  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.085868  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:39.085873  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:39.085941  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:39.111280  488914 cri.go:89] found id: ""
	I1202 21:47:39.111295  488914 logs.go:282] 0 containers: []
	W1202 21:47:39.111302  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:39.111310  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:39.111320  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:39.175646  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:39.175668  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:39.190971  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:39.190987  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:39.258563  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:39.251357   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.251945   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253419   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.253861   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:39.254959   16292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:39.258573  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:39.258584  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:39.333779  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:39.333798  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
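The timestamps show the outer loop re-checking for a kube-apiserver process roughly every two to three seconds ("sudo pgrep -xnf kube-apiserver.*minikube.*") and re-gathering diagnostics each time it is absent. A hedged sketch of such a wait loop follows; the helper name, interval, and timeout are chosen only for illustration and are not taken from minikube:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching process appears or
// the deadline passes. pgrep exits nonzero when there is no match,
// which exec.Command(...).Run() surfaces as an error.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServer(30 * time.Second) {
		fmt.Println("kube-apiserver never came up; gathering diagnostics instead")
	}
}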
	I1202 21:47:41.863817  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:41.873822  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:41.873882  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:41.899560  488914 cri.go:89] found id: ""
	I1202 21:47:41.899585  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.899592  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:41.899598  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:41.899663  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:41.937866  488914 cri.go:89] found id: ""
	I1202 21:47:41.937880  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.937887  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:41.937892  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:41.937960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:41.971862  488914 cri.go:89] found id: ""
	I1202 21:47:41.971876  488914 logs.go:282] 0 containers: []
	W1202 21:47:41.971901  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:41.971907  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:41.971975  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:42.010639  488914 cri.go:89] found id: ""
	I1202 21:47:42.010655  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.010663  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:42.010695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:42.010778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:42.040775  488914 cri.go:89] found id: ""
	I1202 21:47:42.040790  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.040800  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:42.040805  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:42.040881  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:42.072124  488914 cri.go:89] found id: ""
	I1202 21:47:42.072139  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.072149  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:42.072175  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:42.072252  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:42.105424  488914 cri.go:89] found id: ""
	I1202 21:47:42.105439  488914 logs.go:282] 0 containers: []
	W1202 21:47:42.105447  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:42.105456  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:42.105467  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:42.175007  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:42.175032  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:42.194759  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:42.194785  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:42.271235  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:42.261967   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.262745   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264485   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.264882   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:42.266741   16395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:42.271247  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:42.271260  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:42.360263  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:42.360296  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:44.892475  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:44.902425  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:44.902484  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:44.929930  488914 cri.go:89] found id: ""
	I1202 21:47:44.929944  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.929952  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:44.929957  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:44.930017  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:44.959205  488914 cri.go:89] found id: ""
	I1202 21:47:44.959219  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.959225  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:44.959231  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:44.959288  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:44.991335  488914 cri.go:89] found id: ""
	I1202 21:47:44.991350  488914 logs.go:282] 0 containers: []
	W1202 21:47:44.991357  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:44.991362  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:44.991437  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:45.047326  488914 cri.go:89] found id: ""
	I1202 21:47:45.047342  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.047350  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:45.047358  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:45.047440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:45.110770  488914 cri.go:89] found id: ""
	I1202 21:47:45.110787  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.110796  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:45.110803  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:45.110872  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:45.147274  488914 cri.go:89] found id: ""
	I1202 21:47:45.147290  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.147298  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:45.147304  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:45.147372  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:45.230398  488914 cri.go:89] found id: ""
	I1202 21:47:45.230413  488914 logs.go:282] 0 containers: []
	W1202 21:47:45.230421  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:45.230437  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:45.230457  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:45.315457  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:45.307106   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.308124   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.309943   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.310298   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:45.311989   16495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:45.315469  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:45.315479  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:45.391401  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:45.391421  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:45.422183  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:45.422200  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:45.491250  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:45.491269  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.007522  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:48.019509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:48.019579  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:48.047045  488914 cri.go:89] found id: ""
	I1202 21:47:48.047059  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.047066  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:48.047072  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:48.047133  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:48.073355  488914 cri.go:89] found id: ""
	I1202 21:47:48.073370  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.073377  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:48.073383  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:48.073443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:48.101623  488914 cri.go:89] found id: ""
	I1202 21:47:48.101640  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.101653  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:48.101658  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:48.101728  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:48.128708  488914 cri.go:89] found id: ""
	I1202 21:47:48.128722  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.128729  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:48.128734  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:48.128795  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:48.154337  488914 cri.go:89] found id: ""
	I1202 21:47:48.154352  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.154359  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:48.154364  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:48.154426  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:48.181724  488914 cri.go:89] found id: ""
	I1202 21:47:48.181739  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.181746  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:48.181752  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:48.181810  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:48.207628  488914 cri.go:89] found id: ""
	I1202 21:47:48.207641  488914 logs.go:282] 0 containers: []
	W1202 21:47:48.207648  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:48.207655  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:48.207665  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:48.273678  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:48.273699  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:48.289393  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:48.289410  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:48.353116  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:48.345571   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.346016   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347574   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.347915   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:48.349479   16606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:48.353126  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:48.353138  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:48.429785  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:48.429809  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:50.961028  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:50.971337  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:50.971408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:51.004925  488914 cri.go:89] found id: ""
	I1202 21:47:51.004941  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.004949  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:51.004956  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:51.005023  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:51.033852  488914 cri.go:89] found id: ""
	I1202 21:47:51.033866  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.033873  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:51.033879  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:51.033951  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:51.065370  488914 cri.go:89] found id: ""
	I1202 21:47:51.065384  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.065392  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:51.065397  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:51.065454  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:51.091797  488914 cri.go:89] found id: ""
	I1202 21:47:51.091811  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.091819  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:51.091824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:51.091886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:51.118245  488914 cri.go:89] found id: ""
	I1202 21:47:51.118260  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.118267  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:51.118273  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:51.118350  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:51.144813  488914 cri.go:89] found id: ""
	I1202 21:47:51.144828  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.144835  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:51.144841  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:51.144898  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:51.170591  488914 cri.go:89] found id: ""
	I1202 21:47:51.170605  488914 logs.go:282] 0 containers: []
	W1202 21:47:51.170622  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:51.170630  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:51.170641  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:51.201061  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:51.201078  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:51.268903  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:51.268922  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:51.286516  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:51.286532  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:51.360635  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:51.352997   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.353506   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.354983   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.355562   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:51.357043   16720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:51.360647  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:51.360658  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:53.937801  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:53.951326  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:53.951403  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:53.981411  488914 cri.go:89] found id: ""
	I1202 21:47:53.981424  488914 logs.go:282] 0 containers: []
	W1202 21:47:53.981431  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:53.981444  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:53.981504  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:54.019553  488914 cri.go:89] found id: ""
	I1202 21:47:54.019568  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.019576  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:54.019581  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:54.019641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:54.045870  488914 cri.go:89] found id: ""
	I1202 21:47:54.045884  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.045891  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:54.045896  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:54.045960  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:54.072428  488914 cri.go:89] found id: ""
	I1202 21:47:54.072443  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.072450  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:54.072455  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:54.072519  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:54.098413  488914 cri.go:89] found id: ""
	I1202 21:47:54.098427  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.098434  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:54.098439  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:54.098497  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:54.124502  488914 cri.go:89] found id: ""
	I1202 21:47:54.124517  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.124524  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:54.124529  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:54.124589  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:54.151244  488914 cri.go:89] found id: ""
	I1202 21:47:54.151258  488914 logs.go:282] 0 containers: []
	W1202 21:47:54.151265  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:54.151273  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:54.151284  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:54.213677  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:54.205894   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.206296   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.207892   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.208209   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:54.209760   16807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:54.213688  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:54.213700  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:47:54.289814  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:54.289835  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:54.319415  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:54.319432  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:54.385725  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:54.385745  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:56.902920  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:56.915363  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:56.915439  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:56.942569  488914 cri.go:89] found id: ""
	I1202 21:47:56.942583  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.942590  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:56.942596  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:56.942655  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:56.975362  488914 cri.go:89] found id: ""
	I1202 21:47:56.975384  488914 logs.go:282] 0 containers: []
	W1202 21:47:56.975391  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:56.975397  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:56.975456  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:57.006861  488914 cri.go:89] found id: ""
	I1202 21:47:57.006877  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.006884  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:57.006890  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:57.006958  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:47:57.033667  488914 cri.go:89] found id: ""
	I1202 21:47:57.033682  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.033689  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:47:57.033695  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:47:57.033751  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:47:57.059458  488914 cri.go:89] found id: ""
	I1202 21:47:57.059472  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.059479  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:47:57.059484  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:47:57.059544  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:47:57.086098  488914 cri.go:89] found id: ""
	I1202 21:47:57.086112  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.086130  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:47:57.086136  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:47:57.086206  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:47:57.112732  488914 cri.go:89] found id: ""
	I1202 21:47:57.112747  488914 logs.go:282] 0 containers: []
	W1202 21:47:57.112754  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:47:57.112762  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:47:57.112773  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:47:57.141211  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:47:57.141226  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:47:57.210823  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:47:57.210842  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:47:57.226149  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:47:57.226166  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:47:57.287720  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:47:57.280020   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.280594   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282136   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.282592   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:47:57.284108   16929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:47:57.287730  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:47:57.287742  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
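Each cycle in this stretch probes the runtime for one control-plane container at a time; a `found id: ""` followed by `0 containers` means crictl returned nothing for that name. A minimal Go sketch of that probe, assuming sudo and crictl are available on the node (listContainers is an illustrative name, not minikube's actual helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the container IDs the runtime reports for the
    // given name filter, using the same crictl invocation seen in the log.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: probe failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", c, len(ids))
        }
    }

On this node every probe returns an empty list, which is why each cycle falls through to log gathering.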
	I1202 21:47:59.865507  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:47:59.875824  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:47:59.875886  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:47:59.901721  488914 cri.go:89] found id: ""
	I1202 21:47:59.901735  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.901741  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:47:59.901747  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:47:59.901834  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:47:59.938763  488914 cri.go:89] found id: ""
	I1202 21:47:59.938777  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.938784  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:47:59.938789  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:47:59.938844  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:47:59.968613  488914 cri.go:89] found id: ""
	I1202 21:47:59.968627  488914 logs.go:282] 0 containers: []
	W1202 21:47:59.968634  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:47:59.968639  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:47:59.968696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:00.011145  488914 cri.go:89] found id: ""
	I1202 21:48:00.011162  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.011172  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:00.011179  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:00.011248  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:00.128636  488914 cri.go:89] found id: ""
	I1202 21:48:00.128653  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.128662  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:00.128668  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:00.128743  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:00.191602  488914 cri.go:89] found id: ""
	I1202 21:48:00.191633  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.191642  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:00.191651  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:00.191735  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:00.286597  488914 cri.go:89] found id: ""
	I1202 21:48:00.286618  488914 logs.go:282] 0 containers: []
	W1202 21:48:00.286626  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:00.286635  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:00.286657  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:00.393972  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:00.394009  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:00.425438  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:00.425462  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:00.522799  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:00.513889   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.514733   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.515998   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.516488   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:00.518494   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:00.522810  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:00.522822  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:00.603332  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:00.603356  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
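With no containers found, each cycle gathers the same four log sources. The exact commands from the log, collected into a small Go driver for illustration (gatherers is an assumed name, and map iteration order is arbitrary, just as the gather order varies between cycles here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherers maps each log source to the shell command the log shows being run.
    var gatherers = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "CRI-O":            "sudo journalctl -u crio -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range gatherers {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s: err=%v, %d bytes captured\n", name, err, len(out))
        }
    }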
	I1202 21:48:03.142041  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:03.152666  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:03.152730  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:03.179575  488914 cri.go:89] found id: ""
	I1202 21:48:03.179589  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.179596  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:03.179601  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:03.179666  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:03.208278  488914 cri.go:89] found id: ""
	I1202 21:48:03.208293  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.208300  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:03.208305  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:03.208365  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:03.237068  488914 cri.go:89] found id: ""
	I1202 21:48:03.237081  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.237088  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:03.237093  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:03.237150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:03.262185  488914 cri.go:89] found id: ""
	I1202 21:48:03.262199  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.262206  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:03.262212  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:03.262270  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:03.287056  488914 cri.go:89] found id: ""
	I1202 21:48:03.287076  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.287082  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:03.287088  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:03.287150  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:03.312745  488914 cri.go:89] found id: ""
	I1202 21:48:03.312759  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.312766  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:03.312774  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:03.312831  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:03.337493  488914 cri.go:89] found id: ""
	I1202 21:48:03.337507  488914 logs.go:282] 0 containers: []
	W1202 21:48:03.337514  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:03.337522  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:03.337535  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:03.398946  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:03.391250   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.392069   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393665   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.393959   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:03.395438   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:03.398957  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:03.398969  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:03.475063  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:03.475083  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:03.502836  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:03.502852  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:03.569966  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:03.569985  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
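Every describe-nodes attempt dies the same way: the kubeconfig targets localhost:8441 and nothing is listening there, so kubectl's client-side discovery (the memcache.go errors) fails with connection refused before any request reaches a server. A standalone reachability check for that condition, assuming 8441 is this profile's apiserver port as the log shows:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log's "connect: connection refused".
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }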
	I1202 21:48:06.085423  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:06.096220  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:06.096284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:06.124362  488914 cri.go:89] found id: ""
	I1202 21:48:06.124378  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.124384  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:06.124392  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:06.124451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:06.150807  488914 cri.go:89] found id: ""
	I1202 21:48:06.150822  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.150829  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:06.150835  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:06.150896  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:06.177096  488914 cri.go:89] found id: ""
	I1202 21:48:06.177110  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.177117  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:06.177122  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:06.177189  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:06.202670  488914 cri.go:89] found id: ""
	I1202 21:48:06.202684  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.202691  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:06.202697  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:06.202760  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:06.227599  488914 cri.go:89] found id: ""
	I1202 21:48:06.227614  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.227626  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:06.227632  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:06.227692  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:06.252361  488914 cri.go:89] found id: ""
	I1202 21:48:06.252375  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.252381  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:06.252387  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:06.252443  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:06.278301  488914 cri.go:89] found id: ""
	I1202 21:48:06.278315  488914 logs.go:282] 0 containers: []
	W1202 21:48:06.278323  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:06.278331  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:06.278341  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:06.344608  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:06.344629  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:06.359909  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:06.359925  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:06.427972  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:06.420387   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.421055   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.422590   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.423028   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:06.424274   17227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:06.427982  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:06.427993  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:06.503390  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:06.503409  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
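Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a process-level check that runs before the container probes; pgrep exits non-zero when nothing matches, which is consistent with the loop continuing to retry here. A sketch of that check (apiserverRunning is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether pgrep finds a kube-apiserver process
    // whose command line mentions this profile, as in the log.
    func apiserverRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil // pgrep exits non-zero when no process matches
    }

    func main() {
        fmt.Println("kube-apiserver up:", apiserverRunning())
    }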
	I1202 21:48:09.032284  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:09.043491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:09.043554  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:09.073343  488914 cri.go:89] found id: ""
	I1202 21:48:09.073358  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.073365  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:09.073371  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:09.073438  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:09.106311  488914 cri.go:89] found id: ""
	I1202 21:48:09.106325  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.106332  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:09.106337  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:09.106400  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:09.137607  488914 cri.go:89] found id: ""
	I1202 21:48:09.137622  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.137630  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:09.137635  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:09.137696  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:09.165465  488914 cri.go:89] found id: ""
	I1202 21:48:09.165479  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.165486  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:09.165491  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:09.165553  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:09.191695  488914 cri.go:89] found id: ""
	I1202 21:48:09.191709  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.191715  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:09.191721  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:09.191778  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:09.217199  488914 cri.go:89] found id: ""
	I1202 21:48:09.217213  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.217221  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:09.217227  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:09.217284  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:09.243947  488914 cri.go:89] found id: ""
	I1202 21:48:09.243961  488914 logs.go:282] 0 containers: []
	W1202 21:48:09.243977  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:09.243985  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:09.243995  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:09.259022  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:09.259038  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:09.325462  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:09.318310   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.318693   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320180   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.320473   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:09.321913   17332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:09.325472  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:09.325483  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:09.404565  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:09.404586  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:09.435844  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:09.435860  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
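The container-status gatherer is deliberately defensive: `which crictl || echo crictl` keeps the literal command name when which fails, so crictl is still attempted from the default PATH, and the trailing `|| sudo docker ps -a` falls back to Docker if crictl itself errors. The same two-level fallback expressed in Go (containerStatus is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the bash one-liner in the log.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Printf("err=%v, %d bytes\n", err, len(out))
    }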
	I1202 21:48:12.005527  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:12.017298  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:12.017364  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:12.043631  488914 cri.go:89] found id: ""
	I1202 21:48:12.043645  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.043652  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:12.043657  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:12.043717  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:12.072548  488914 cri.go:89] found id: ""
	I1202 21:48:12.072562  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.072569  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:12.072574  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:12.072634  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:12.097779  488914 cri.go:89] found id: ""
	I1202 21:48:12.097792  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.097799  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:12.097806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:12.097861  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:12.122380  488914 cri.go:89] found id: ""
	I1202 21:48:12.122394  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.122400  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:12.122406  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:12.122462  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:12.147485  488914 cri.go:89] found id: ""
	I1202 21:48:12.147499  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.147506  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:12.147511  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:12.147569  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:12.172352  488914 cri.go:89] found id: ""
	I1202 21:48:12.172372  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.172379  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:12.172385  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:12.172451  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:12.197386  488914 cri.go:89] found id: ""
	I1202 21:48:12.197400  488914 logs.go:282] 0 containers: []
	W1202 21:48:12.197406  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:12.197414  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:12.197425  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:12.212275  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:12.212291  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:12.283599  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:12.274650   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.275361   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.276431   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278180   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:12.278757   17435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:12.283609  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:12.283620  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:12.362146  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:12.362177  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:12.394426  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:12.394452  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:14.959300  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:14.969317  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:14.969378  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:14.995679  488914 cri.go:89] found id: ""
	I1202 21:48:14.995693  488914 logs.go:282] 0 containers: []
	W1202 21:48:14.995701  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:14.995706  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:14.995767  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:15.039291  488914 cri.go:89] found id: ""
	I1202 21:48:15.039307  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.039316  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:15.039322  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:15.039440  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:15.066778  488914 cri.go:89] found id: ""
	I1202 21:48:15.066793  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.066800  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:15.066806  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:15.066866  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:15.096009  488914 cri.go:89] found id: ""
	I1202 21:48:15.096031  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.096039  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:15.096045  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:15.096109  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:15.124965  488914 cri.go:89] found id: ""
	I1202 21:48:15.124980  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.124987  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:15.124992  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:15.125055  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:15.151140  488914 cri.go:89] found id: ""
	I1202 21:48:15.151155  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.151162  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:15.151168  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:15.151225  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:15.180343  488914 cri.go:89] found id: ""
	I1202 21:48:15.180362  488914 logs.go:282] 0 containers: []
	W1202 21:48:15.180369  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:15.180378  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:15.180389  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:15.245885  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:15.245905  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:15.261189  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:15.261204  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:15.329096  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:15.320945   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.321625   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323381   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.323999   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:15.325649   17542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:15.329106  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:15.329119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:15.404768  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:15.404789  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:17.936657  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:17.948615  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:17.948678  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:17.980274  488914 cri.go:89] found id: ""
	I1202 21:48:17.980288  488914 logs.go:282] 0 containers: []
	W1202 21:48:17.980295  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:17.980301  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:17.980358  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:18.009972  488914 cri.go:89] found id: ""
	I1202 21:48:18.009988  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.009995  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:18.010000  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:18.010068  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:18.037292  488914 cri.go:89] found id: ""
	I1202 21:48:18.037307  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.037314  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:18.037320  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:18.037389  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:18.068010  488914 cri.go:89] found id: ""
	I1202 21:48:18.068025  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.068034  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:18.068039  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:18.068100  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:18.098519  488914 cri.go:89] found id: ""
	I1202 21:48:18.098537  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.098545  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:18.098552  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:18.098616  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:18.125321  488914 cri.go:89] found id: ""
	I1202 21:48:18.125336  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.125343  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:18.125349  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:18.125408  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:18.154110  488914 cri.go:89] found id: ""
	I1202 21:48:18.154124  488914 logs.go:282] 0 containers: []
	W1202 21:48:18.154131  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:18.154139  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:18.154161  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:18.186862  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:18.186879  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:18.252168  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:18.252188  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:18.267297  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:18.267312  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:18.330969  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:18.322138   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.322985   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.324625   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.325317   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:18.326981   17660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:18.330979  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:18.330989  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
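Taken together, this section is one poll-until-deadline loop: test for an apiserver and, while none appears, collect the same diagnostics and retry; the timestamps show a new cycle starting roughly every 2.5 to 3 seconds. A stripped-down sketch of that control flow, not minikube's actual code (waitForAPIServer is illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    func waitForAPIServer(timeout time.Duration, up func() bool) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if up() {
                return nil
            }
            // In the real log each miss triggers the kubelet/dmesg/
            // describe-nodes/CRI-O gathering before the next attempt.
            time.Sleep(2500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not come up within %v", timeout)
    }

    func main() {
        err := waitForAPIServer(10*time.Second, func() bool { return false })
        fmt.Println(err)
    }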
	I1202 21:48:20.906864  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:20.918719  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:20.918779  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:20.946664  488914 cri.go:89] found id: ""
	I1202 21:48:20.946681  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.946688  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:20.946694  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:20.946757  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:20.973074  488914 cri.go:89] found id: ""
	I1202 21:48:20.973088  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.973095  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:20.973100  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:20.973160  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:20.998478  488914 cri.go:89] found id: ""
	I1202 21:48:20.998495  488914 logs.go:282] 0 containers: []
	W1202 21:48:20.998503  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:20.998509  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:20.998582  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:21.033676  488914 cri.go:89] found id: ""
	I1202 21:48:21.033691  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.033708  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:21.033714  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:21.033773  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:21.059527  488914 cri.go:89] found id: ""
	I1202 21:48:21.059549  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.059557  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:21.059562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:21.059623  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:21.088534  488914 cri.go:89] found id: ""
	I1202 21:48:21.088548  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.088555  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:21.088562  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:21.088618  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:21.114102  488914 cri.go:89] found id: ""
	I1202 21:48:21.114116  488914 logs.go:282] 0 containers: []
	W1202 21:48:21.114123  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:21.114130  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:21.114141  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:21.176428  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:21.168087   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.168660   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.170374   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.171027   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:21.172682   17748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:48:21.176438  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:21.176449  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:21.251600  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:21.251621  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:21.278584  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:21.278600  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:21.350258  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:21.350279  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:23.865709  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:23.876050  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:48:23.876119  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:48:23.906000  488914 cri.go:89] found id: ""
	I1202 21:48:23.906014  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.906021  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:48:23.906027  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:48:23.906094  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:48:23.934001  488914 cri.go:89] found id: ""
	I1202 21:48:23.934015  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.934022  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:48:23.934028  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:48:23.934088  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:48:23.969619  488914 cri.go:89] found id: ""
	I1202 21:48:23.969633  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.969640  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:48:23.969645  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:48:23.969710  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:48:23.997123  488914 cri.go:89] found id: ""
	I1202 21:48:23.997137  488914 logs.go:282] 0 containers: []
	W1202 21:48:23.997144  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:48:23.997149  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:48:23.997211  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:48:24.027561  488914 cri.go:89] found id: ""
	I1202 21:48:24.027576  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.027584  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:48:24.027590  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:48:24.027660  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:48:24.053543  488914 cri.go:89] found id: ""
	I1202 21:48:24.053558  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.053565  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:48:24.053570  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:48:24.053641  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:48:24.080080  488914 cri.go:89] found id: ""
	I1202 21:48:24.080094  488914 logs.go:282] 0 containers: []
	W1202 21:48:24.080101  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:48:24.080109  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:48:24.080119  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:48:24.147092  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:48:24.147112  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:48:24.162650  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:48:24.162666  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:48:24.225019  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:48:24.217597   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.218108   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.219630   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.220139   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:48:24.221601   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:48:24.225029  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:48:24.225039  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:48:24.300286  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:48:24.300307  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:48:26.831634  488914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:48:26.843079  488914 kubeadm.go:602] duration metric: took 4m3.730369294s to restartPrimaryControlPlane
	W1202 21:48:26.843152  488914 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:48:26.843233  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:48:27.259211  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:48:27.272350  488914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:48:27.280460  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:48:27.280517  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:48:27.288570  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:48:27.288578  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:48:27.288628  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:48:27.296654  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:48:27.296709  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:48:27.304086  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:48:27.311898  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:48:27.311953  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:48:27.319289  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.326825  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:48:27.326888  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:48:27.334620  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:48:27.342084  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:48:27.342139  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
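	The four grep/rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is deleted otherwise so that the kubeadm init below regenerates it. A minimal shell sketch of the same pattern (illustrative only; the endpoint and file names are taken from the log lines above):
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already targets the expected endpoint;
	      # otherwise remove it so the next kubeadm init rewrites it
	      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done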
	I1202 21:48:27.349467  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:48:27.386582  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:48:27.386896  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:48:27.472364  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:48:27.472439  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:48:27.472489  488914 kubeadm.go:319] OS: Linux
	I1202 21:48:27.472545  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:48:27.472601  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:48:27.472644  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:48:27.472700  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:48:27.472753  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:48:27.472804  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:48:27.472859  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:48:27.472915  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:48:27.472973  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:48:27.543309  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:48:27.543431  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:48:27.543527  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:48:27.554036  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:48:27.559373  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:48:27.559468  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:48:27.559542  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:48:27.559629  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:48:27.559701  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:48:27.559787  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:48:27.559841  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:48:27.559915  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:48:27.559985  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:48:27.560076  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:48:27.560159  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:48:27.560210  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:48:27.560269  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:48:27.850282  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:48:28.505037  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:48:28.762985  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:48:28.951263  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:48:29.183372  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:48:29.184043  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:48:29.186561  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:48:29.189676  488914 out.go:252]   - Booting up control plane ...
	I1202 21:48:29.189765  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:48:29.189838  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:48:29.191619  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:48:29.207350  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:48:29.207778  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:48:29.215590  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:48:29.215853  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:48:29.216063  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:48:29.353309  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:48:29.353417  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:52:29.354218  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001230264s
	I1202 21:52:29.354245  488914 kubeadm.go:319] 
	I1202 21:52:29.354298  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:52:29.354329  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:52:29.354427  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:52:29.354432  488914 kubeadm.go:319] 
	I1202 21:52:29.354529  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:52:29.354559  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:52:29.354587  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:52:29.354590  488914 kubeadm.go:319] 
	I1202 21:52:29.358907  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:52:29.359370  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:52:29.359489  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:52:29.359719  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:52:29.359724  488914 kubeadm.go:319] 
	I1202 21:52:29.359816  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:52:29.359952  488914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230264s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
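	Both init attempts fail the same way: kubelet never answers on http://127.0.0.1:10248/healthz within 4m0s. The [WARNING SystemVerification] above flags a plausible cause, namely that this 5.15 host runs cgroups v1, which is opt-in for kubelet v1.35+ via the KubeletConfiguration option FailCgroupV1. A hedged sketch of that opt-in, assuming the KubeletConfiguration is the final YAML document in /var/tmp/minikube/kubeadm.yaml (the config file the log shows kubeadm consuming; verify the document order before appending):
	
	    # hypothetical remediation: explicitly re-enable cgroups v1 for kubelet
	    # (assumes KubeletConfiguration is the last YAML document in the file)
	    echo 'failCgroupV1: false' | sudo tee -a /var/tmp/minikube/kubeadm.yaml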
	
	I1202 21:52:29.360041  488914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 21:52:29.774288  488914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:52:29.786781  488914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:52:29.786832  488914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:52:29.794551  488914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:52:29.794562  488914 kubeadm.go:158] found existing configuration files:
	
	I1202 21:52:29.794615  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:52:29.802140  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:52:29.802200  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:52:29.809778  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:52:29.817315  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:52:29.817375  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:52:29.824944  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.832581  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:52:29.832636  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:52:29.840105  488914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:52:29.848039  488914 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:52:29.848102  488914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:52:29.855571  488914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:52:29.895459  488914 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:52:29.895508  488914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:52:29.966851  488914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:52:29.966918  488914 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:52:29.966952  488914 kubeadm.go:319] OS: Linux
	I1202 21:52:29.967027  488914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:52:29.967074  488914 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:52:29.967120  488914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:52:29.967166  488914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:52:29.967212  488914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:52:29.967259  488914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:52:29.967302  488914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:52:29.967348  488914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:52:29.967393  488914 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:52:30.044273  488914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:52:30.044406  488914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:52:30.044512  488914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:52:30.059289  488914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:52:30.064606  488914 out.go:252]   - Generating certificates and keys ...
	I1202 21:52:30.064707  488914 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:52:30.064778  488914 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:52:30.064861  488914 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:52:30.064927  488914 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:52:30.065002  488914 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:52:30.065061  488914 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:52:30.065130  488914 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:52:30.065197  488914 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:52:30.065280  488914 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:52:30.065358  488914 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:52:30.065394  488914 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:52:30.065457  488914 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:52:30.391272  488914 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:52:30.580061  488914 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:52:30.892953  488914 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:52:31.052311  488914 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:52:31.356833  488914 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:52:31.357398  488914 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:52:31.360444  488914 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:52:31.363666  488914 out.go:252]   - Booting up control plane ...
	I1202 21:52:31.363767  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:52:31.363843  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:52:31.364787  488914 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:52:31.380952  488914 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:52:31.381067  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:52:31.389182  488914 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:52:31.389514  488914 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:52:31.389769  488914 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:52:31.510935  488914 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:52:31.511077  488914 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:56:31.511610  488914 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043188s
	I1202 21:56:31.511635  488914 kubeadm.go:319] 
	I1202 21:56:31.511691  488914 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:56:31.511724  488914 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:56:31.511828  488914 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:56:31.511833  488914 kubeadm.go:319] 
	I1202 21:56:31.511936  488914 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:56:31.511966  488914 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:56:31.511996  488914 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:56:31.511999  488914 kubeadm.go:319] 
	I1202 21:56:31.516147  488914 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:56:31.516591  488914 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:56:31.516707  488914 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:56:31.516982  488914 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:56:31.516989  488914 kubeadm.go:319] 
	I1202 21:56:31.517086  488914 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:56:31.517154  488914 kubeadm.go:403] duration metric: took 12m8.4399317s to StartCluster
	I1202 21:56:31.517186  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:56:31.517279  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:56:31.545508  488914 cri.go:89] found id: ""
	I1202 21:56:31.545521  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.545528  488914 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:56:31.545538  488914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 21:56:31.545593  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:56:31.573505  488914 cri.go:89] found id: ""
	I1202 21:56:31.573519  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.573526  488914 logs.go:284] No container was found matching "etcd"
	I1202 21:56:31.573532  488914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 21:56:31.573594  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:56:31.598620  488914 cri.go:89] found id: ""
	I1202 21:56:31.598634  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.598642  488914 logs.go:284] No container was found matching "coredns"
	I1202 21:56:31.598647  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:56:31.598718  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:56:31.624500  488914 cri.go:89] found id: ""
	I1202 21:56:31.624514  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.624522  488914 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:56:31.624528  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:56:31.624590  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:56:31.650576  488914 cri.go:89] found id: ""
	I1202 21:56:31.650591  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.650598  488914 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:56:31.650604  488914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:56:31.650665  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:56:31.677681  488914 cri.go:89] found id: ""
	I1202 21:56:31.677696  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.677703  488914 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:56:31.677709  488914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 21:56:31.677772  488914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:56:31.702889  488914 cri.go:89] found id: ""
	I1202 21:56:31.702903  488914 logs.go:282] 0 containers: []
	W1202 21:56:31.702910  488914 logs.go:284] No container was found matching "kindnet"
	I1202 21:56:31.702918  488914 logs.go:123] Gathering logs for kubelet ...
	I1202 21:56:31.702928  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:56:31.769428  488914 logs.go:123] Gathering logs for dmesg ...
	I1202 21:56:31.769447  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:56:31.784680  488914 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:56:31.784696  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:56:31.848558  488914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:56:31.839494   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.840234   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.842167   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.843113   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:31.844989   21636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:56:31.848570  488914 logs.go:123] Gathering logs for CRI-O ...
	I1202 21:56:31.848581  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 21:56:31.924323  488914 logs.go:123] Gathering logs for container status ...
	I1202 21:56:31.924343  488914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:56:31.952600  488914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:56:31.952640  488914 out.go:285] * 
	W1202 21:56:31.952744  488914 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:56:31.952799  488914 out.go:285] * 
	W1202 21:56:31.955203  488914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:56:31.960375  488914 out.go:203] 
	W1202 21:56:31.963105  488914 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:56:31.963144  488914 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:56:31.963163  488914 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:56:31.966130  488914 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.45283707Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=bf00db59-611c-44fb-b66b-5de338fe239d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486207629Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486338707Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:40 functional-066896 crio[10511]: time="2025-12-02T21:56:40.486372447Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=9cc93f8b-10f8-469c-8715-a6d3e45c1f15 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.31254149Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=02dfde09-63cb-48a9-bc75-2498ded8aebd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338777762Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338914322Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.338952624Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=918b3d12-01d6-4ffb-bd80-8ec3fd1d1682 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364142306Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364305064Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:41 functional-066896 crio[10511]: time="2025-12-02T21:56:41.364345213Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a0dde857-d717-496d-8774-09a527eb58de name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.448620533Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=2c172ded-5053-4702-8981-86fe65b3eb5a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473261763Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473491575Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.473554164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=e5751169-d5f6-4cd7-a025-a6773006a5a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502089674Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502268679Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:42 functional-066896 crio[10511]: time="2025-12-02T21:56:42.502308638Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=a174c8e4-4ebc-4699-aea7-c098f14ab693 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.270878698Z" level=info msg="Checking image status: kicbase/echo-server:functional-066896" id=6683c882-fed2-46df-a5c6-4c16ad59fbea name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300274442Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-066896" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300423301Z" level=info msg="Image docker.io/kicbase/echo-server:functional-066896 not found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.300466198Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-066896 found" id=f394d3ca-2986-4196-9d60-fad23b58cd49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325738621Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-066896" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325897843Z" level=info msg="Image localhost/kicbase/echo-server:functional-066896 not found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 21:56:43 functional-066896 crio[10511]: time="2025-12-02T21:56:43.325952326Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-066896 found" id=ff06a2a9-88a9-48e9-a908-641e39bc6443 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:56:45.540455   22618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:45.541181   22618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:45.542137   22618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:45.543762   22618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:56:45.544220   22618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036471] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.767807] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 2 18:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 19:19] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000075d2ab1{9P.session} n=0000000067ef044f
	[  +0.001116] FS-Cache: O-key=[10] '34323935383131383138'
	[  +0.000780] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000075d2ab1{9P.session} n=000000000abdbbb6
	[  +0.001090] FS-Cache: N-key=[10] '34323935383131383138'
	[Dec 2 20:15] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 2 21:07] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 2 21:09] overlayfs: idmapped layers are currently not supported
	[  +0.079260] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 21:15] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:16] overlayfs: idmapped layers are currently not supported
	[Dec 2 21:29] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:56:45 up  3:38,  0 user,  load average: 0.47, 0.23, 0.34
	Linux functional-066896 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:56:42 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:43 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 978.
	Dec 02 21:56:43 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:43 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:43 functional-066896 kubelet[22449]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:43 functional-066896 kubelet[22449]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:43 functional-066896 kubelet[22449]: E1202 21:56:43.474755   22449 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:43 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:43 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 979.
	Dec 02 21:56:44 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:44 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:44 functional-066896 kubelet[22514]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:44 functional-066896 kubelet[22514]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:44 functional-066896 kubelet[22514]: E1202 21:56:44.230275   22514 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 980.
	Dec 02 21:56:44 functional-066896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:44 functional-066896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:56:44 functional-066896 kubelet[22535]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:44 functional-066896 kubelet[22535]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 21:56:44 functional-066896 kubelet[22535]: E1202 21:56:44.982893   22535 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:56:44 functional-066896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-066896 -n functional-066896: exit status 2 (360.270501ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-066896" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.34s)
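The kubeadm preflight warning and the kubelet journal above point at a single root cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless the configuration option 'FailCgroupV1' (named in the preflight warning) is set to 'false' and the validation is explicitly skipped. A minimal triage sketch, assuming shell access to the host; the profile name and the --extra-config suggestion are taken verbatim from the minikube output above:

	# tmpfs here indicates cgroup v1 (the failing case); cgroup2fs would indicate cgroup v2
	stat -fc %T /sys/fs/cgroup/
	# minikube's own generic suggestion from the log; it changes the cgroup driver, not FailCgroupV1
	out/minikube-linux-arm64 start -p functional-066896 --extra-config=kubelet.cgroup-driver=systemd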

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1202 21:56:38.857465  501578 out.go:360] Setting OutFile to fd 1 ...
I1202 21:56:38.857697  501578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:56:38.857711  501578 out.go:374] Setting ErrFile to fd 2...
I1202 21:56:38.857717  501578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:56:38.858367  501578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:56:38.860089  501578 mustload.go:66] Loading cluster: functional-066896
I1202 21:56:38.862731  501578 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:56:38.863268  501578 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:56:38.892943  501578 host.go:66] Checking if "functional-066896" exists ...
I1202 21:56:38.893229  501578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:56:39.063478  501578 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:56:39.048210186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:56:39.063617  501578 api_server.go:166] Checking apiserver status ...
I1202 21:56:39.063757  501578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 21:56:39.063844  501578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:56:39.092651  501578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
W1202 21:56:39.208799  501578 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1202 21:56:39.212819  501578 out.go:179] * The control-plane node functional-066896 apiserver is not running: (state=Stopped)
I1202 21:56:39.216224  501578 out.go:179]   To start a cluster, run: "minikube start -p functional-066896"

stdout: * The control-plane node functional-066896 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-066896"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 501579: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-066896 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-066896 apply -f testdata/testsvc.yaml: exit status 1 (141.211536ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-066896 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (117.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.101.145.220": Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-066896 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-066896 get svc nginx-svc: exit status 1 (58.533701ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-066896 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (117.07s)
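The steps this test automates can be replayed by hand once the apiserver is reachable; a sketch using only commands and addresses already present in the output above (10.101.145.220 is the ClusterIP from the timeout message):

	out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr &
	kubectl --context functional-066896 get svc nginx-svc
	curl --max-time 10 http://10.101.145.220    # expected to serve the "Welcome to nginx!" page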

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-066896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-066896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-066896
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image load --daemon kicbase/echo-server:functional-066896 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-066896" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)
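The three daemon-load failures above share one repro path; a sketch assembled from the test's own steps, ending with the listing that comes back empty here:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-066896
	out/minikube-linux-arm64 -p functional-066896 image load --daemon kicbase/echo-server:functional-066896
	out/minikube-linux-arm64 -p functional-066896 image ls    # should list kicbase/echo-server:functional-066896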

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image save kicbase/echo-server:functional-066896 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1202 21:56:43.649981  502395 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:56:43.650175  502395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:43.650206  502395 out.go:374] Setting ErrFile to fd 2...
	I1202 21:56:43.650229  502395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:43.650520  502395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:56:43.651229  502395 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:56:43.651420  502395 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:56:43.651969  502395 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
	I1202 21:56:43.671474  502395 ssh_runner.go:195] Run: systemctl --version
	I1202 21:56:43.671537  502395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
	I1202 21:56:43.691840  502395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
	I1202 21:56:43.793702  502395 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1202 21:56:43.793766  502395 cache_images.go:255] Failed to load cached images for "functional-066896": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1202 21:56:43.793788  502395 cache_images.go:267] failed pushing to: functional-066896

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-066896
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image save --daemon kicbase/echo-server:functional-066896 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-066896
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-066896: exit status 1 (18.847215ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-066896

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-066896

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)
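ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon fail as a cascade: 'image save' never writes the tarball, so the later load has nothing to read (the "no such file or directory" stderr above). The round trip, sketched from the test steps:

	out/minikube-linux-arm64 -p functional-066896 image save kicbase/echo-server:functional-066896 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar    # absent here, which explains the ImageLoadFromFile failure above
	out/minikube-linux-arm64 -p functional-066896 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar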

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764712610748691523" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764712610748691523" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764712610748691523" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001/test-1764712610748691523
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.524926ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
I1202 21:56:51.106487  447211 retry.go:31] will retry after 713.690278ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 21:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 21:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 21:56 test-1764712610748691523
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh cat /mount-9p/test-1764712610748691523
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-066896 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-066896 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.338661ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-066896 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (270.288488ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=33227)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  2 21:56 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  2 21:56 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  2 21:56 test-1764712610748691523
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-066896 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
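Note that the 9p mount itself succeeded (see the findmnt and mount output above); only the kubectl step failed, again because the apiserver is down. A sketch for verifying a mount by hand, mirroring the test's own ssh probes (any host directory can stand in for the test's temp dir):

	out/minikube-linux-arm64 mount -p functional-066896 /tmp/mount-src:/mount-9p &
	out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-066896 ssh -- ls -la /mount-9p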
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:33227
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001:/mount-9p --alsologtostderr -v=1] stderr:
I1202 21:56:50.804919  503787 out.go:360] Setting OutFile to fd 1 ...
I1202 21:56:50.805070  503787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:56:50.805081  503787 out.go:374] Setting ErrFile to fd 2...
I1202 21:56:50.805087  503787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:56:50.805436  503787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:56:50.805743  503787 mustload.go:66] Loading cluster: functional-066896
I1202 21:56:50.806368  503787 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:56:50.807361  503787 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:56:50.832142  503787 host.go:66] Checking if "functional-066896" exists ...
I1202 21:56:50.832415  503787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:56:50.941664  503787 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:56:50.930552759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:56:50.941826  503787 cli_runner.go:164] Run: docker network inspect functional-066896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 21:56:50.980056  503787 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001 into VM as /mount-9p ...
I1202 21:56:50.983895  503787 out.go:179]   - Mount type:   9p
I1202 21:56:50.989745  503787 out.go:179]   - User ID:      docker
I1202 21:56:50.992828  503787 out.go:179]   - Group ID:     docker
I1202 21:56:50.996462  503787 out.go:179]   - Version:      9p2000.L
I1202 21:56:50.999940  503787 out.go:179]   - Message Size: 262144
I1202 21:56:51.003354  503787 out.go:179]   - Options:      map[]
I1202 21:56:51.006477  503787 out.go:179]   - Bind Address: 192.168.49.1:33227
I1202 21:56:51.009440  503787 out.go:179] * Userspace file server: 
I1202 21:56:51.009818  503787 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 21:56:51.009940  503787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:56:51.035311  503787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:56:51.145878  503787 mount.go:180] unmount for /mount-9p ran successfully
I1202 21:56:51.145935  503787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1202 21:56:51.154283  503787 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=33227,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1202 21:56:51.164815  503787 main.go:127] stdlog: ufs.go:141 connected
I1202 21:56:51.164969  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tversion tag 65535 msize 262144 version '9P2000.L'
I1202 21:56:51.165003  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rversion tag 65535 msize 262144 version '9P2000'
I1202 21:56:51.165267  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1202 21:56:51.165322  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rattach tag 0 aqid (f16103 e1117bba 'd')
I1202 21:56:51.167157  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 0
I1202 21:56:51.167228  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16103 e1117bba 'd') m d775 at 0 mt 1764712610 l 4096 t 0 d 0 ext )
I1202 21:56:51.168764  503787 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/.mount-process: {Name:mk046aec45aa286ca6e6a4914e480d077fcb811b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:56:51.168941  503787 mount.go:105] mount successful: ""
I1202 21:56:51.172368  503787 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2642165225/001 to /mount-9p
I1202 21:56:51.175232  503787 out.go:203] 
I1202 21:56:51.178019  503787 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1202 21:56:52.351236  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 0
I1202 21:56:52.351321  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16103 e1117bba 'd') m d775 at 0 mt 1764712610 l 4096 t 0 d 0 ext )
I1202 21:56:52.351663  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 1 
I1202 21:56:52.351697  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 
I1202 21:56:52.351838  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Topen tag 0 fid 1 mode 0
I1202 21:56:52.351923  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Ropen tag 0 qid (f16103 e1117bba 'd') iounit 0
I1202 21:56:52.352055  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 0
I1202 21:56:52.352092  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16103 e1117bba 'd') m d775 at 0 mt 1764712610 l 4096 t 0 d 0 ext )
I1202 21:56:52.352247  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 0 count 262120
I1202 21:56:52.352362  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 258
I1202 21:56:52.352485  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 261862
I1202 21:56:52.352521  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.352652  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:56:52.352676  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.352800  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 21:56:52.352833  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16104 e1117bba '') 
I1202 21:56:52.352971  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.353006  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16104 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.353132  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.353163  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16104 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.353296  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.353325  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.353450  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'test-1764712610748691523' 
I1202 21:56:52.353488  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16107 e1117bba '') 
I1202 21:56:52.353619  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.353652  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.353776  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.353808  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.353933  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.353956  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.354080  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 21:56:52.354120  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16106 e1117bba '') 
I1202 21:56:52.354233  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.354271  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16106 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.354393  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.354433  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16106 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.354545  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.354569  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.354698  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:56:52.354728  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.354866  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 1
I1202 21:56:52.354899  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.651480  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 1 0:'test-1764712610748691523' 
I1202 21:56:52.651559  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16107 e1117bba '') 
I1202 21:56:52.651761  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 1
I1202 21:56:52.651812  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.651968  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 1 newfid 2 
I1202 21:56:52.652000  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 
I1202 21:56:52.652120  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Topen tag 0 fid 2 mode 0
I1202 21:56:52.652172  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Ropen tag 0 qid (f16107 e1117bba '') iounit 0
I1202 21:56:52.652305  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 1
I1202 21:56:52.652350  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.652495  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 2 offset 0 count 262120
I1202 21:56:52.652539  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 24
I1202 21:56:52.652664  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 2 offset 24 count 262120
I1202 21:56:52.652727  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.652871  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 2 offset 24 count 262120
I1202 21:56:52.652905  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.653053  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.653108  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.653337  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 1
I1202 21:56:52.653370  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.985722  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 0
I1202 21:56:52.985798  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16103 e1117bba 'd') m d775 at 0 mt 1764712610 l 4096 t 0 d 0 ext )
I1202 21:56:52.986146  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 1 
I1202 21:56:52.986195  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 
I1202 21:56:52.986321  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Topen tag 0 fid 1 mode 0
I1202 21:56:52.986371  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Ropen tag 0 qid (f16103 e1117bba 'd') iounit 0
I1202 21:56:52.986499  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 0
I1202 21:56:52.986533  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16103 e1117bba 'd') m d775 at 0 mt 1764712610 l 4096 t 0 d 0 ext )
I1202 21:56:52.986685  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 0 count 262120
I1202 21:56:52.986787  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 258
I1202 21:56:52.986933  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 261862
I1202 21:56:52.986963  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.987093  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:56:52.987118  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.987253  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 21:56:52.987285  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16104 e1117bba '') 
I1202 21:56:52.987407  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.987441  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16104 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.987576  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.987608  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16104 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.987719  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.987741  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.987868  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'test-1764712610748691523' 
I1202 21:56:52.987898  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16107 e1117bba '') 
I1202 21:56:52.988006  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.988038  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.988164  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.988202  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('test-1764712610748691523' 'jenkins' 'jenkins' '' q (f16107 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.988322  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.988344  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.988477  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 21:56:52.988512  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rwalk tag 0 (f16106 e1117bba '') 
I1202 21:56:52.988620  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.988667  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16106 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.988787  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tstat tag 0 fid 2
I1202 21:56:52.988817  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16106 e1117bba '') m 644 at 0 mt 1764712610 l 24 t 0 d 0 ext )
I1202 21:56:52.988927  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 2
I1202 21:56:52.988950  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.989074  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:56:52.989097  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rread tag 0 count 0
I1202 21:56:52.989224  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 1
I1202 21:56:52.989249  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:52.990445  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1202 21:56:52.990504  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rerror tag 0 ename 'file not found' ecode 0
I1202 21:56:53.257495  503787 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44612 Tclunk tag 0 fid 0
I1202 21:56:53.257549  503787 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44612 Rclunk tag 0
I1202 21:56:53.258581  503787 main.go:127] stdlog: ufs.go:147 disconnected
I1202 21:56:53.278554  503787 out.go:179] * Unmounting /mount-9p ...
I1202 21:56:53.281678  503787 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 21:56:53.288638  503787 mount.go:180] unmount for /mount-9p ran successfully
I1202 21:56:53.288745  503787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/.mount-process: {Name:mk046aec45aa286ca6e6a4914e480d077fcb811b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:56:53.291806  503787 out.go:203] 
W1202 21:56:53.294812  503787 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1202 21:56:53.297783  503787 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.63s)
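
The 9p trace above follows the same message pattern for every file it serves: Twalk binds the name to a fresh fid, Tstat fetches metadata, Topen/Tread drain the contents until an Rread with count 0 signals end-of-file, and Tclunk releases each fid. A minimal client-side sketch of that open/read-until-EOF/close sequence over the mount point, using plain os calls rather than a 9p library (the /mount-9p path and file name are taken from the trace; this is illustrative, not the test's code):

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// os.Open over a 9p mount corresponds to the Twalk (resolve the name to a
	// new fid) plus Topen pair seen in the trace.
	f, err := os.Open("/mount-9p/test-1764712610748691523") // path taken from the trace
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close() // Close corresponds to Tclunk, releasing the fid

	buf := make([]byte, 262120) // the read size the client used above
	for {
		n, err := f.Read(buf) // each Read becomes a Tread at the current offset
		if n > 0 {
			os.Stdout.Write(buf[:n])
		}
		if err == io.EOF { // an Rread with count 0 surfaces as io.EOF
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}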

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-066896 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-066896 create deployment hello-node --image kicbase/echo-server: exit status 1 (57.299294ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-066896 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)
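
The deployment never reaches the cluster: nothing is listening on 192.168.49.2:8441, so kubectl fails at the TCP dial. A minimal sketch (assuming kubectl on PATH and the functional-066896 context; not the test harness itself) of distinguishing that dial failure from other kubectl errors via os/exec:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Re-run the failing step: create the deployment against the test context.
	cmd := exec.Command("kubectl", "--context", "functional-066896",
		"create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "connection refused") {
			// Matches the trace: the apiserver port is closed, so every
			// kubectl call fails at the dial, not inside the cluster.
			fmt.Println("apiserver unreachable:", strings.TrimSpace(stderr.String()))
			return
		}
		fmt.Println("kubectl failed:", err)
	}
}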

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 service list: exit status 103 (258.377224ms)

-- stdout --
	* The control-plane node functional-066896 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-066896"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-066896 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-066896\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 service list -o json: exit status 103 (253.198672ms)

-- stdout --
	* The control-plane node functional-066896 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-066896"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-066896 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 service --namespace=default --https --url hello-node: exit status 103 (283.066324ms)

-- stdout --
	* The control-plane node functional-066896 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-066896"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-066896 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 service hello-node --url --format={{.IP}}: exit status 103 (271.027442ms)

-- stdout --
	* The control-plane node functional-066896 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-066896"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-066896 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-066896\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)
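
The Format subtest feeds minikube's `--format={{.IP}}` output straight into an IP validity check, so the human-readable "apiserver is not running" advice fails it. A minimal sketch of the same kind of validation with net.ParseIP (the string is the stdout captured above; illustrative, not the test's exact code):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// The stdout minikube produced instead of an address (copied from above).
	out := "* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-066896\""

	// A valid result must parse as an IP address; this one cannot.
	if net.ParseIP(strings.TrimSpace(out)) == nil {
		fmt.Printf("%q is not a valid IP\n", out)
	}
}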

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 service hello-node --url: exit status 103 (260.318903ms)

-- stdout --
	* The control-plane node functional-066896 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-066896"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-066896 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-066896 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-066896"
functional_test.go:1579: failed to parse "* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-066896\"": parse "* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-066896\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)
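
The URL subtest goes one step further and hands the output to net/url, which rejects the embedded newline as a control character, producing the parse error recorded above. A small sketch reproducing that failure mode (string taken from the captured stdout):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The stdout minikube produced instead of a URL (copied from above).
	out := "* The control-plane node functional-066896 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-066896\""

	// The embedded newline is an ASCII control character, which url.Parse
	// rejects with the same "invalid control character in URL" error.
	if _, err := url.Parse(out); err != nil {
		fmt.Println(err)
	}
}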

TestJSONOutput/pause/Command (1.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-361713 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-361713 --output=json --user=testUser: exit status 80 (1.779832207s)

-- stdout --
	{"specversion":"1.0","id":"dd7a9313-5556-4eb1-a4d0-1aae8bcc0b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-361713 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e182216b-87ff-43af-8dae-2ac93637f74c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T22:13:52Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"3e69fe6f-4ae3-4c44-9d50-d3c5d77e39f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-361713 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.78s)

TestJSONOutput/unpause/Command (2.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-361713 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-361713 --output=json --user=testUser: exit status 80 (2.229313208s)

-- stdout --
	{"specversion":"1.0","id":"59ff56b3-84fe-4b4a-a8e6-be1a1521ffac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-361713 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"685199e6-5222-4a0e-a29e-549771e88522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T22:13:54Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"311df26c-6047-4d86-8191-877e53935e23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-361713 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.23s)
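
Both pause and unpause emit CloudEvents-style JSON lines on stdout, with the failure carried in an io.k8s.sigs.minikube.error event whose data payload is entirely string-valued. A minimal sketch of decoding one such line with encoding/json (struct fields limited to keys visible in the trace; the data literal below is abridged, not the full event):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope keys visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Abridged copy of the error event from the failing unpause run above.
	line := `{"specversion":"1.0","id":"685199e6-5222-4a0e-a29e-549771e88522",` +
		`"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"80","name":"GUEST_UNPAUSE"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Error events carry the exit code and failure name as strings in data.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}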

TestKubernetesUpgrade (802.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 22:31:39.333303  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.499815628s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-636006
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-636006: (3.029753162s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-636006 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-636006 status --format={{.Host}}: exit status 7 (218.461668ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m35.297192056s)

-- stdout --
	* [kubernetes-upgrade-636006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-636006" primary control-plane node in "kubernetes-upgrade-636006" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

-- /stdout --
** stderr ** 
	I1202 22:32:16.153808  624674 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:32:16.153970  624674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:32:16.153977  624674 out.go:374] Setting ErrFile to fd 2...
	I1202 22:32:16.153981  624674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:32:16.154249  624674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:32:16.154603  624674 out.go:368] Setting JSON to false
	I1202 22:32:16.155544  624674 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15265,"bootTime":1764699472,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 22:32:16.155605  624674 start.go:143] virtualization:  
	I1202 22:32:16.161750  624674 out.go:179] * [kubernetes-upgrade-636006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:32:16.164789  624674 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:32:16.164848  624674 notify.go:221] Checking for updates...
	I1202 22:32:16.170728  624674 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:32:16.173562  624674 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:32:16.176357  624674 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 22:32:16.179688  624674 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:32:16.182552  624674 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:32:16.185814  624674 config.go:182] Loaded profile config "kubernetes-upgrade-636006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 22:32:16.186382  624674 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:32:16.225656  624674 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:32:16.225778  624674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:32:16.325741  624674 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-02 22:32:16.31636038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:32:16.325850  624674 docker.go:319] overlay module found
	I1202 22:32:16.329215  624674 out.go:179] * Using the docker driver based on existing profile
	I1202 22:32:16.331990  624674 start.go:309] selected driver: docker
	I1202 22:32:16.332015  624674 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-636006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-636006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:32:16.332113  624674 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:32:16.332742  624674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:32:16.419757  624674 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-02 22:32:16.40964204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:32:16.420086  624674 cni.go:84] Creating CNI manager for ""
	I1202 22:32:16.420157  624674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:32:16.420199  624674 start.go:353] cluster config:
	{Name:kubernetes-upgrade-636006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-636006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:32:16.423772  624674 out.go:179] * Starting "kubernetes-upgrade-636006" primary control-plane node in "kubernetes-upgrade-636006" cluster
	I1202 22:32:16.426599  624674 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 22:32:16.429559  624674 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:32:16.432375  624674 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 22:32:16.432557  624674 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:32:16.465395  624674 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:32:16.465422  624674 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:32:16.494425  624674 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 22:32:17.467677  624674 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 22:32:17.467872  624674 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/config.json ...
	I1202 22:32:17.467939  624674 cache.go:107] acquiring lock: {Name:mkdb548c2bac3a34960b3a6b545c5054893fbdbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468023  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:32:17.468032  624674 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.437µs
	I1202 22:32:17.468044  624674 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:32:17.468055  624674 cache.go:107] acquiring lock: {Name:mk839b44c66d3913032a714047c9f670b63ef5c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468093  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:32:17.468099  624674 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.424µs
	I1202 22:32:17.468105  624674 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:32:17.468114  624674 cache.go:107] acquiring lock: {Name:mk304f3f4096bbef7ba5732eb1c9aa925d997d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468127  624674 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:32:17.468143  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:32:17.468153  624674 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 34.921µs
	I1202 22:32:17.468159  624674 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:32:17.468156  624674 start.go:360] acquireMachinesLock for kubernetes-upgrade-636006: {Name:mkf84139e19ff882e28c7ee3329d94115e2ec821 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468180  624674 cache.go:107] acquiring lock: {Name:mk03cba00219db8d08f500539d1cc58f1aa2c195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468194  624674 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "kubernetes-upgrade-636006"
	I1202 22:32:17.468208  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:32:17.468215  624674 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 36.284µs
	I1202 22:32:17.468220  624674 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:32:17.468208  624674 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:32:17.468229  624674 fix.go:54] fixHost starting: 
	I1202 22:32:17.468225  624674 cache.go:107] acquiring lock: {Name:mka11ffa1d74c5fb1433b9ef620b1f0c932c9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468265  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:32:17.468271  624674 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.508µs
	I1202 22:32:17.468277  624674 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:32:17.468288  624674 cache.go:107] acquiring lock: {Name:mk9d447fd14c5eb7766d3d9fb118eaa73c4b13e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468327  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:32:17.468332  624674 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 45.826µs
	I1202 22:32:17.468338  624674 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:32:17.468347  624674 cache.go:107] acquiring lock: {Name:mk0172f7efb3e46eb92d9fd6ce76c4cf051093bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468373  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:32:17.468377  624674 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.574µs
	I1202 22:32:17.468383  624674 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:32:17.468394  624674 cache.go:107] acquiring lock: {Name:mk13b34174fa20d0742c1b37457f9c68005668b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:32:17.468419  624674 cache.go:115] /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:32:17.468423  624674 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.542µs
	I1202 22:32:17.468429  624674 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:32:17.468436  624674 cache.go:87] Successfully saved all images to host disk.
	I1202 22:32:17.468492  624674 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-636006 --format={{.State.Status}}
	I1202 22:32:17.484828  624674 fix.go:112] recreateIfNeeded on kubernetes-upgrade-636006: state=Stopped err=<nil>
	W1202 22:32:17.484867  624674 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:32:17.490541  624674 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-636006" ...
	I1202 22:32:17.490629  624674 cli_runner.go:164] Run: docker start kubernetes-upgrade-636006
	I1202 22:32:17.733005  624674 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-636006 --format={{.State.Status}}
	I1202 22:32:17.760322  624674 kic.go:430] container "kubernetes-upgrade-636006" state is running.
	I1202 22:32:17.760704  624674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-636006
	I1202 22:32:17.781128  624674 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/config.json ...
	I1202 22:32:17.781468  624674 machine.go:94] provisionDockerMachine start ...
	I1202 22:32:17.781591  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:17.803970  624674 main.go:143] libmachine: Using SSH client type: native
	I1202 22:32:17.804668  624674 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I1202 22:32:17.804683  624674 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:32:17.805435  624674 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 22:32:20.971863  624674 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-636006
	
	I1202 22:32:20.971887  624674 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-636006"
	I1202 22:32:20.971954  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:20.997881  624674 main.go:143] libmachine: Using SSH client type: native
	I1202 22:32:20.998177  624674 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I1202 22:32:20.998195  624674 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-636006 && echo "kubernetes-upgrade-636006" | sudo tee /etc/hostname
	I1202 22:32:21.169792  624674 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-636006
	
	I1202 22:32:21.169880  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:21.192533  624674 main.go:143] libmachine: Using SSH client type: native
	I1202 22:32:21.192846  624674 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I1202 22:32:21.192863  624674 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-636006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-636006/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-636006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:32:21.356047  624674 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:32:21.356071  624674 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 22:32:21.356108  624674 ubuntu.go:190] setting up certificates
	I1202 22:32:21.356118  624674 provision.go:84] configureAuth start
	I1202 22:32:21.356179  624674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-636006
	I1202 22:32:21.379263  624674 provision.go:143] copyHostCerts
	I1202 22:32:21.379336  624674 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 22:32:21.379348  624674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 22:32:21.379412  624674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 22:32:21.379515  624674 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 22:32:21.379520  624674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 22:32:21.379543  624674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 22:32:21.379597  624674 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 22:32:21.379601  624674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 22:32:21.379621  624674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 22:32:21.379666  624674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-636006 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-636006 localhost minikube]
	I1202 22:32:21.621887  624674 provision.go:177] copyRemoteCerts
	I1202 22:32:21.621954  624674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:32:21.622013  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:21.640575  624674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/kubernetes-upgrade-636006/id_rsa Username:docker}
	I1202 22:32:21.751882  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 22:32:21.780224  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1202 22:32:21.803510  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 22:32:21.825431  624674 provision.go:87] duration metric: took 469.29152ms to configureAuth
	I1202 22:32:21.825470  624674 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:32:21.825649  624674 config.go:182] Loaded profile config "kubernetes-upgrade-636006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 22:32:21.825747  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:21.842721  624674 main.go:143] libmachine: Using SSH client type: native
	I1202 22:32:21.843123  624674 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I1202 22:32:21.843142  624674 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 22:32:22.281356  624674 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 22:32:22.281382  624674 machine.go:97] duration metric: took 4.499900954s to provisionDockerMachine
	I1202 22:32:22.281394  624674 start.go:293] postStartSetup for "kubernetes-upgrade-636006" (driver="docker")
	I1202 22:32:22.281407  624674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:32:22.281474  624674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:32:22.281528  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:22.303700  624674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/kubernetes-upgrade-636006/id_rsa Username:docker}
	I1202 22:32:22.408052  624674 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:32:22.411789  624674 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:32:22.411859  624674 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:32:22.411884  624674 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 22:32:22.411968  624674 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 22:32:22.412085  624674 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 22:32:22.412238  624674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:32:22.420556  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:32:22.439945  624674 start.go:296] duration metric: took 158.535505ms for postStartSetup
	I1202 22:32:22.440041  624674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:32:22.440094  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:22.460061  624674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/kubernetes-upgrade-636006/id_rsa Username:docker}
	I1202 22:32:22.565118  624674 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:32:22.570569  624674 fix.go:56] duration metric: took 5.102333002s for fixHost
	I1202 22:32:22.570594  624674 start.go:83] releasing machines lock for "kubernetes-upgrade-636006", held for 5.102392014s
	I1202 22:32:22.570666  624674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-636006
	I1202 22:32:22.596171  624674 ssh_runner.go:195] Run: cat /version.json
	I1202 22:32:22.596223  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:22.596478  624674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:32:22.596529  624674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-636006
	I1202 22:32:22.625172  624674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/kubernetes-upgrade-636006/id_rsa Username:docker}
	I1202 22:32:22.635252  624674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/kubernetes-upgrade-636006/id_rsa Username:docker}
	I1202 22:32:22.743807  624674 ssh_runner.go:195] Run: systemctl --version
	I1202 22:32:22.857949  624674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 22:32:22.911836  624674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:32:22.920037  624674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:32:22.920102  624674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:32:22.928838  624674 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
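The find invocation above renames any bridge/podman CNI configs out of the way so kindnet can own the pod network; here it finds nothing. Written with ordinary shell quoting (a sketch, not minikube's exact argument passing):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;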
	I1202 22:32:22.928860  624674 start.go:496] detecting cgroup driver to use...
	I1202 22:32:22.928900  624674 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:32:22.928947  624674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 22:32:22.945288  624674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 22:32:22.959835  624674 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:32:22.959893  624674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:32:22.977840  624674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:32:22.992531  624674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:32:23.141425  624674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:32:23.305354  624674 docker.go:234] disabling docker service ...
	I1202 22:32:23.305450  624674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:32:23.326345  624674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:32:23.342285  624674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:32:23.507241  624674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:32:23.651315  624674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
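The runtime switch above follows the standard systemd pattern for retiring competing container runtimes: stop both socket and service, then disable and mask so socket activation cannot bring them back. A condensed sketch of the same steps:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true   # stop if running
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service  # block re-activation
    sudo systemctl is-active --quiet docker && echo "docker still active"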
	I1202 22:32:23.665761  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:32:23.686211  624674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 22:32:23.686345  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.701756  624674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 22:32:23.701881  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.716248  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.727529  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.739519  624674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:32:23.755524  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.764586  624674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:32:23.773082  624674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
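After the sed passes above, the touched fragment of /etc/crio/crio.conf.d/02-crio.conf should look roughly like this (reconstructed from the edits; the TOML section headers are the standard CRI-O ones and are not shown in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]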
	I1202 22:32:23.784545  624674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:32:23.793280  624674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:32:23.804772  624674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:32:23.949255  624674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 22:32:24.143712  624674 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 22:32:24.143777  624674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 22:32:24.148646  624674 start.go:564] Will wait 60s for crictl version
	I1202 22:32:24.148709  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:24.153573  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:32:24.182388  624674 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 22:32:24.182501  624674 ssh_runner.go:195] Run: crio --version
	I1202 22:32:24.220220  624674 ssh_runner.go:195] Run: crio --version
	I1202 22:32:24.258164  624674 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 22:32:24.260974  624674 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-636006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:32:24.287645  624674 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:32:24.291365  624674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
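The hosts update above is an idempotent replace-or-append: strip any existing host.minikube.internal line, append the fresh mapping, and copy the result back. Generalized sketch (IP and name from the log):

    IP=192.168.76.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$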
	I1202 22:32:24.301173  624674 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-636006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-636006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:32:24.301300  624674 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 22:32:24.301349  624674 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:32:24.349289  624674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:32:24.349320  624674 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:32:24.349374  624674 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:32:24.349592  624674 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:24.349700  624674 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:24.349785  624674 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:24.349880  624674 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:24.349976  624674 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:32:24.350073  624674 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:24.350165  624674 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:24.353354  624674 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:24.353739  624674 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:24.353894  624674 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:24.354031  624674 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:24.354149  624674 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:32:24.354378  624674 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:24.354527  624674 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:24.354751  624674 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
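The daemon lookups above fail because the host's Docker daemon has none of these images cached, so minikube falls back to the on-disk tarball cache. The per-image presence probe it then runs inside the node is essentially:

    # A non-empty ID means the image already exists in podman/CRI-O storage.
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.6.5-0 \
      || echo "not present; will transfer from cache"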
	I1202 22:32:24.734144  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:24.756376  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:24.787642  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 22:32:24.796072  624674 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:32:24.796153  624674 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:24.796216  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:24.826696  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:24.828173  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:24.849125  624674 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:32:24.849228  624674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:24.849293  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:24.850523  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:24.886816  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:24.965091  624674 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:32:24.965267  624674 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:32:24.965325  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:24.965223  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:25.002161  624674 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:32:25.002268  624674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:25.002349  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:25.002413  624674 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:32:25.002644  624674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:25.002689  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:25.002461  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:25.002524  624674 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:32:25.002759  624674 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:25.002787  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:25.091961  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:25.092015  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:32:25.092134  624674 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:32:25.092167  624674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:25.092200  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:25.104528  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:25.104598  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:25.104636  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:25.104686  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:25.252483  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:25.252642  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:32:25.252740  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:32:25.278671  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:32:25.278851  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:25.278958  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:25.279063  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:32:25.402272  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:25.402418  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:32:25.402520  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:32:25.402624  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:32:25.478759  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:32:25.478920  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:32:25.479073  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:32:25.479176  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:32:25.479263  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1202 22:32:25.568108  624674 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:32:25.568292  624674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:32:25.576403  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:32:25.576503  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:32:25.576587  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:32:25.576629  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:32:25.576645  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:32:25.698199  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:32:25.698242  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:32:25.698313  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:32:25.698392  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:32:25.698441  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:32:25.698487  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:32:25.698533  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:32:25.698583  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:32:25.857980  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:32:25.858022  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:32:25.858087  624674 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:32:25.858120  624674 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:32:25.858166  624674 ssh_runner.go:195] Run: which crictl
	I1202 22:32:25.858320  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:32:25.858392  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:32:25.858448  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:32:25.858465  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:32:25.858503  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:32:25.858516  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:32:25.858550  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:32:25.858564  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:32:25.952470  624674 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:32:25.952531  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:32:25.952550  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:32:26.013161  624674 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:32:26.013238  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 22:32:26.186270  624674 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:32:26.186399  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:32:26.550187  624674 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:32:26.550226  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:32:26.550277  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:32:26.583726  624674 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:32:26.583795  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:32:29.081816  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.497994383s)
	I1202 22:32:29.081893  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:32:29.081928  624674 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:32:29.081997  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:32:31.605958  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.523918551s)
	I1202 22:32:31.606049  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:32:31.606085  624674 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:32:31.606158  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:32:33.257522  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.651328921s)
	I1202 22:32:33.257553  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:32:33.257594  624674 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:32:33.257649  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:32:35.136638  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.878950164s)
	I1202 22:32:35.136662  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:32:35.136679  624674 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:32:35.136727  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:32:37.127221  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.990471925s)
	I1202 22:32:37.127250  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:32:37.127273  624674 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:32:37.127321  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:32:39.266101  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.138755645s)
	I1202 22:32:39.266124  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:32:39.266143  624674 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:32:39.266195  624674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:32:40.297966  624674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.031750516s)
	I1202 22:32:40.297988  624674 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-444114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:32:40.298007  624674 cache_images.go:125] Successfully loaded all cached images
	I1202 22:32:40.298012  624674 cache_images.go:94] duration metric: took 15.948679415s to LoadCachedImages
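Each cached tarball is copied over SSH and then imported serially; the load step per image is just podman load into the storage CRI-O reads. A sketch of the loop (paths from the log):

    for tar in /var/lib/minikube/images/*; do
      sudo podman load -i "$tar"   # import into shared containers/storage
    done
    sudo crictl images             # verify the control-plane images are visible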
	I1202 22:32:40.298020  624674 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 22:32:40.298121  624674 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-636006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-636006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
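Note the empty ExecStart= line in the unit above: in a systemd drop-in this is the standard idiom to clear the inherited command before substituting a new one. A trimmed sketch of the 10-kubeadm.conf drop-in written below (flags abbreviated from the log):

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet \
      --config=/var/lib/kubelet/config.yaml \
      --kubeconfig=/etc/kubernetes/kubelet.conf \
      --node-ip=192.168.76.2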
	I1202 22:32:40.298198  624674 ssh_runner.go:195] Run: crio config
	I1202 22:32:40.418985  624674 cni.go:84] Creating CNI manager for ""
	I1202 22:32:40.419083  624674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:32:40.419194  624674 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:32:40.419239  624674 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-636006 NodeName:kubernetes-upgrade-636006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:32:40.419402  624674 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-636006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
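The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged to /var/tmp/minikube/kubeadm.yaml.new below. With a kubeadm new enough to ship the validate subcommand (assumed here; the staged v1.35.0-beta.0 binary qualifies), the file can be sanity-checked before use:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new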
	I1202 22:32:40.419492  624674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:32:40.429040  624674 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:32:40.429146  624674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:32:40.438331  624674 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:32:40.438492  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:32:40.438613  624674 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:32:40.438666  624674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:32:40.438781  624674 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:32:40.438870  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:32:40.465186  624674 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:32:40.465410  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:32:40.465274  624674 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:32:40.465529  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:32:40.465391  624674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:32:40.493985  624674 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:32:40.494034  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
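With no preloaded binaries on the node, each one is fetched from dl.k8s.io against a detached SHA-256. The published .sha256 files contain only the bare digest, so verification must supply the filename itself; a sketch for kubelet (same pattern for kubeadm and kubectl):

    V=v1.35.0-beta.0; A=arm64
    curl -fsSLo kubelet "https://dl.k8s.io/release/$V/bin/linux/$A/kubelet"
    curl -fsSLo kubelet.sha256 "https://dl.k8s.io/release/$V/bin/linux/$A/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # expects "kubelet: OK"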
	I1202 22:32:41.751853  624674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:32:41.778546  624674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1202 22:32:41.803877  624674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:32:41.843006  624674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1202 22:32:41.881174  624674 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:32:41.887799  624674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:32:41.922910  624674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:32:42.097333  624674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:32:42.120862  624674 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006 for IP: 192.168.76.2
	I1202 22:32:42.120892  624674 certs.go:195] generating shared ca certs ...
	I1202 22:32:42.120918  624674 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:32:42.121083  624674 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 22:32:42.121138  624674 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 22:32:42.121148  624674 certs.go:257] generating profile certs ...
	I1202 22:32:42.121266  624674 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/client.key
	I1202 22:32:42.121346  624674 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/apiserver.key.d66b722d
	I1202 22:32:42.121402  624674 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/proxy-client.key
	I1202 22:32:42.121514  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 22:32:42.121551  624674 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 22:32:42.121568  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:32:42.121604  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 22:32:42.121635  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:32:42.121663  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 22:32:42.121716  624674 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:32:42.122384  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:32:42.165287  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:32:42.204083  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:32:42.263910  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 22:32:42.296002  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 22:32:42.337091  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 22:32:42.358839  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:32:42.381286  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:32:42.414273  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 22:32:42.437181  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 22:32:42.465021  624674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:32:42.488732  624674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:32:42.507313  624674 ssh_runner.go:195] Run: openssl version
	I1202 22:32:42.516187  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 22:32:42.525638  624674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 22:32:42.529496  624674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 22:32:42.529567  624674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 22:32:42.573201  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 22:32:42.582318  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 22:32:42.598644  624674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 22:32:42.605002  624674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 22:32:42.605145  624674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 22:32:42.663404  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:32:42.671585  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:32:42.683952  624674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:32:42.691174  624674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:32:42.691296  624674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:32:42.741317  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
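The openssl/ln pairs above reimplement c_rehash by hand: OpenSSL looks up a CA at /etc/ssl/certs/<subject-hash>.0, so each PEM gets a symlink named after its hash (b5213941 is the minikubeCA hash seen in this run). Sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"    # .0 = first cert with this hash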
	I1202 22:32:42.753583  624674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:32:42.763133  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:32:42.814026  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:32:42.858649  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:32:42.901348  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:32:42.951989  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:32:43.022321  624674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
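The -checkend 86400 probes above ask whether each certificate remains valid for the next 24 hours; openssl exits 0 if so and 1 if it would expire, which is what gates regeneration. Standalone:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert valid for at least 24h"
    else
      echo "apiserver cert expires within 24h; regenerate"
    fi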
	I1202 22:32:43.069687  624674 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-636006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-636006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:32:43.069781  624674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 22:32:43.069849  624674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:32:43.108164  624674 cri.go:89] found id: ""
	I1202 22:32:43.108245  624674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:32:43.116628  624674 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:32:43.116662  624674 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:32:43.116732  624674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:32:43.125173  624674 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:32:43.125850  624674 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-636006" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:32:43.126133  624674 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-444114/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-636006" cluster setting kubeconfig missing "kubernetes-upgrade-636006" context setting]
	I1202 22:32:43.126630  624674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:32:43.127427  624674 kapi.go:59] client config for kubernetes-upgrade-636006: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/kubernetes-upgrade-636006/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:32:43.127999  624674 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 22:32:43.128017  624674 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 22:32:43.128022  624674 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 22:32:43.128026  624674 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 22:32:43.128030  624674 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 22:32:43.128287  624674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:32:43.184026  624674 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 22:31:51.762945208 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 22:32:41.871528298 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-636006"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
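
Note: the drift above is the kubeadm v1beta3 -> v1beta4 config migration that the v1.28.0 -> v1.35.0-beta.0 upgrade forces: extraArgs (and kubeletExtraArgs) change from a string map to an ordered list of name/value pairs, and the etcd proxy-refresh-interval override is dropped entirely. A minimal Go sketch of the two shapes, using illustrative types rather than kubeadm's actual API package:

    package main

    import "fmt"

    // v1beta3 style: flags as an unordered string map.
    type extraArgsV1beta3 map[string]string

    // v1beta4 style: flags as an ordered list of name/value
    // pairs, which also allows a flag to be repeated.
    type arg struct{ Name, Value string }
    type extraArgsV1beta4 []arg

    func main() {
        old := extraArgsV1beta3{"leader-elect": "false"}
        cur := extraArgsV1beta4{{Name: "leader-elect", Value: "false"}}
        fmt.Println(old, cur)
    }
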
	I1202 22:32:43.184052  624674 kubeadm.go:1161] stopping kube-system containers ...
	I1202 22:32:43.184064  624674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 22:32:43.184122  624674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:32:43.241330  624674 cri.go:89] found id: ""
	I1202 22:32:43.241455  624674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 22:32:43.259771  624674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:32:43.280547  624674 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  2 22:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  2 22:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  2 22:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec  2 22:32 /etc/kubernetes/scheduler.conf
	
	I1202 22:32:43.280697  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:32:43.289665  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:32:43.297726  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:32:43.305758  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:32:43.305857  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:32:43.314283  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:32:43.322096  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:32:43.322183  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
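
Note: the grep/rm sequence above validates each existing kubeconfig against the expected control-plane endpoint; admin.conf and kubelet.conf pass, while controller-manager.conf and scheduler.conf fail the check and are removed so the kubeconfig phase below can regenerate them. An illustrative Go sketch of that check-and-remove pattern, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // If the expected endpoint is absent, drop the file so
            // the kubeadm kubeconfig phase regenerates it.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
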
	I1202 22:32:43.329642  624674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:32:43.339416  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:32:43.404953  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:32:45.081688  624674 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.676697232s)
	I1202 22:32:45.081786  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:32:45.385589  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:32:45.465327  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
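
Note: instead of a full kubeadm init, the upgrade path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, as the five commands above show. A hedged sketch of that pattern using the binary and config paths from this log; the loop itself is illustrative, not minikube's actual ssh_runner code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0" // from the log
        config := "/var/tmp/minikube/kubeadm.yaml"            // from the log
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            sh := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, config)
            if out, err := exec.Command("sudo", "/bin/bash", "-c", sh).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
    }
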
	I1202 22:32:45.522984  624674 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:32:45.523081  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:46.024171  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:46.523435  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.023161  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.523516  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:48.023878  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:48.524200  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:49.023826  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:49.524039  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.023805  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.523823  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:51.023776  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:51.524136  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:52.023948  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:52.523160  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.023161  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.523268  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:54.023621  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:54.523541  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.023906  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.523225  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:56.024235  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:56.523293  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:57.023229  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:57.523146  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.024152  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.523136  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:59.023180  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:59.523262  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:00.029180  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:00.524001  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.023570  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.523496  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:02.023149  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:02.523344  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:03.023197  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:03.523253  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.023702  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.523248  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:05.023186  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:05.524210  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:06.023176  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:06.523932  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.023758  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.523198  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:08.023202  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:08.523173  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:09.023220  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:09.523787  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.023250  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.523985  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:11.024106  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:11.523215  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:12.023189  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:12.523999  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.023853  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.523259  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:14.023280  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:14.523233  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:15.024004  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:15.524131  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.024116  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.523792  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:17.023204  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:17.523792  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:18.024180  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:18.524031  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.023851  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.524150  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:20.024108  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:20.524170  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:21.023553  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:21.523607  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.023206  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.523312  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:23.023808  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:23.523145  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:24.023972  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:24.523816  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.023371  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.523438  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:26.023245  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:26.523204  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:27.024024  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:27.523161  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.023612  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.523842  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:29.023412  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:29.523256  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:30.023933  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:30.524191  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.023960  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.524073  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:32.023875  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:32.523233  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:33.023813  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:33.523660  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.023238  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.523189  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:35.024130  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:35.523203  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:36.023246  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:36.523849  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.023275  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.523218  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:38.024161  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:38.523658  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:39.023140  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:39.523649  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.024211  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.523904  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:41.023222  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:41.523495  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:42.023804  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:42.523253  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.023672  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.523232  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:44.023259  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:44.523242  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:45.029302  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
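
Note: the run above is a roughly 500ms poll loop waiting for a kube-apiserver process to appear; a full minute of pgrep calls (22:32:45 through 22:33:45) finds nothing, so the test falls through to gathering diagnostic logs between slower retries. A minimal sketch of such a wait loop, mirroring the cadence visible in the log rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until a matching process
    // appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Same probe the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
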
	I1202 22:33:45.523176  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:45.523269  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:45.550039  624674 cri.go:89] found id: ""
	I1202 22:33:45.550064  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.550072  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:45.550079  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:33:45.550140  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:45.576032  624674 cri.go:89] found id: ""
	I1202 22:33:45.576056  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.576064  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:33:45.576071  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:33:45.576128  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:45.600479  624674 cri.go:89] found id: ""
	I1202 22:33:45.600502  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.600510  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:33:45.600516  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:45.600573  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:45.629079  624674 cri.go:89] found id: ""
	I1202 22:33:45.629101  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.629110  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:45.629116  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:45.629177  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:45.658971  624674 cri.go:89] found id: ""
	I1202 22:33:45.659024  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.659033  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:45.659041  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:45.659103  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:45.701184  624674 cri.go:89] found id: ""
	I1202 22:33:45.701206  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.701214  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:45.701221  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:45.701276  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:45.731307  624674 cri.go:89] found id: ""
	I1202 22:33:45.731390  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.731407  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:45.731415  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:33:45.731487  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:33:45.759318  624674 cri.go:89] found id: ""
	I1202 22:33:45.759385  624674 logs.go:282] 0 containers: []
	W1202 22:33:45.759410  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:33:45.759432  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:45.759460  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:45.832692  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:45.832732  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:45.850526  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:45.850557  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:45.921208  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:45.921273  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:33:45.921295  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:33:45.961716  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:33:45.961749  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
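
Note: this gather cycle (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) repeats below between retries, and "describe nodes" keeps failing with connection refused because nothing is listening on localhost:8443. The container-status step uses a shell fallback so it works whether crictl is on PATH or only docker is available; a sketch of that one command, copied from the log into an illustrative Go wrapper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Fallback chain from the log: prefer crictl, fall back to docker.
        sh := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", sh).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
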
	I1202 22:33:48.491837  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:48.502856  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:48.502933  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:48.530369  624674 cri.go:89] found id: ""
	I1202 22:33:48.530395  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.530404  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:48.530410  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:33:48.530470  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:48.556413  624674 cri.go:89] found id: ""
	I1202 22:33:48.556436  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.556445  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:33:48.556452  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:33:48.556510  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:48.585688  624674 cri.go:89] found id: ""
	I1202 22:33:48.585763  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.585787  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:33:48.585809  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:48.585897  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:48.611058  624674 cri.go:89] found id: ""
	I1202 22:33:48.611131  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.611154  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:48.611172  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:48.611256  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:48.636026  624674 cri.go:89] found id: ""
	I1202 22:33:48.636049  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.636058  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:48.636064  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:48.636143  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:48.663041  624674 cri.go:89] found id: ""
	I1202 22:33:48.663073  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.663082  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:48.663089  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:48.663153  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:48.693455  624674 cri.go:89] found id: ""
	I1202 22:33:48.693483  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.693491  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:48.693497  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:33:48.693555  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:33:48.726007  624674 cri.go:89] found id: ""
	I1202 22:33:48.726035  624674 logs.go:282] 0 containers: []
	W1202 22:33:48.726043  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:33:48.726052  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:48.726067  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:48.797202  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:48.797241  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:48.814773  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:48.814801  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:48.881901  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:48.881969  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:33:48.881999  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:33:48.925718  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:33:48.925752  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:51.453575  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:51.463671  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:51.463779  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:51.493447  624674 cri.go:89] found id: ""
	I1202 22:33:51.493525  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.493550  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:51.493563  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:33:51.493638  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:51.518716  624674 cri.go:89] found id: ""
	I1202 22:33:51.518745  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.518753  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:33:51.518761  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:33:51.518823  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:51.544124  624674 cri.go:89] found id: ""
	I1202 22:33:51.544201  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.544226  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:33:51.544244  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:51.544328  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:51.573269  624674 cri.go:89] found id: ""
	I1202 22:33:51.573337  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.573359  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:51.573376  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:51.573456  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:51.598769  624674 cri.go:89] found id: ""
	I1202 22:33:51.598796  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.598817  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:51.598824  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:51.598893  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:51.626645  624674 cri.go:89] found id: ""
	I1202 22:33:51.626668  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.626678  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:51.626705  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:51.626780  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:51.651493  624674 cri.go:89] found id: ""
	I1202 22:33:51.651516  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.651525  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:51.651532  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:33:51.651589  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:33:51.697469  624674 cri.go:89] found id: ""
	I1202 22:33:51.697503  624674 logs.go:282] 0 containers: []
	W1202 22:33:51.697519  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:33:51.697528  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:51.697540  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:51.782082  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:51.782118  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:51.798809  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:51.798838  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:51.865271  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:51.865296  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:33:51.865309  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:33:51.904816  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:33:51.904849  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:54.433851  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:54.444356  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:54.444440  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:54.470022  624674 cri.go:89] found id: ""
	I1202 22:33:54.470047  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.470056  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:54.470063  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:33:54.470123  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:54.499363  624674 cri.go:89] found id: ""
	I1202 22:33:54.499391  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.499401  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:33:54.499409  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:33:54.499470  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:54.528842  624674 cri.go:89] found id: ""
	I1202 22:33:54.528868  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.528877  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:33:54.528884  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:54.528946  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:54.560401  624674 cri.go:89] found id: ""
	I1202 22:33:54.560427  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.560436  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:54.560443  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:54.560503  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:54.586168  624674 cri.go:89] found id: ""
	I1202 22:33:54.586192  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.586201  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:54.586207  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:54.586269  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:54.610368  624674 cri.go:89] found id: ""
	I1202 22:33:54.610396  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.610405  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:54.610412  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:54.610471  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:54.638429  624674 cri.go:89] found id: ""
	I1202 22:33:54.638454  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.638463  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:54.638470  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:33:54.638533  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:33:54.664836  624674 cri.go:89] found id: ""
	I1202 22:33:54.664923  624674 logs.go:282] 0 containers: []
	W1202 22:33:54.664947  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:33:54.665006  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:54.665041  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:54.744483  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:54.744524  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:54.761128  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:54.761155  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:54.821902  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:54.821970  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:33:54.821988  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:33:54.862035  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:33:54.862069  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:57.397203  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:57.407217  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:57.407283  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:57.437309  624674 cri.go:89] found id: ""
	I1202 22:33:57.437334  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.437343  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:57.437349  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:33:57.437406  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:57.463511  624674 cri.go:89] found id: ""
	I1202 22:33:57.463533  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.463541  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:33:57.463550  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:33:57.463607  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:57.494683  624674 cri.go:89] found id: ""
	I1202 22:33:57.494707  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.494716  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:33:57.494722  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:57.494778  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:57.520477  624674 cri.go:89] found id: ""
	I1202 22:33:57.520500  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.520509  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:57.520516  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:57.520573  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:57.545811  624674 cri.go:89] found id: ""
	I1202 22:33:57.545834  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.545843  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:57.545850  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:57.545911  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:57.571941  624674 cri.go:89] found id: ""
	I1202 22:33:57.571971  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.571980  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:57.571993  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:57.572058  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:57.601228  624674 cri.go:89] found id: ""
	I1202 22:33:57.601251  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.601260  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:57.601266  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:33:57.601329  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:33:57.626482  624674 cri.go:89] found id: ""
	I1202 22:33:57.626509  624674 logs.go:282] 0 containers: []
	W1202 22:33:57.626518  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:33:57.626528  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:57.626540  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:57.642580  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:57.642653  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:57.726962  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:57.727058  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:33:57.727088  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:33:57.769645  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:33:57.769727  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:57.812816  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:57.812894  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:00.391353  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:00.404547  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:00.404629  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:00.449479  624674 cri.go:89] found id: ""
	I1202 22:34:00.449504  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.449513  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:00.449520  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:00.449587  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:00.483105  624674 cri.go:89] found id: ""
	I1202 22:34:00.483128  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.483137  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:00.483145  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:00.483207  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:00.531545  624674 cri.go:89] found id: ""
	I1202 22:34:00.531567  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.531576  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:00.531583  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:00.531640  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:00.559605  624674 cri.go:89] found id: ""
	I1202 22:34:00.559627  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.559636  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:00.559643  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:00.559706  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:00.587881  624674 cri.go:89] found id: ""
	I1202 22:34:00.587903  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.587911  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:00.587918  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:00.587990  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:00.616335  624674 cri.go:89] found id: ""
	I1202 22:34:00.616357  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.616366  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:00.616372  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:00.616430  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:00.643569  624674 cri.go:89] found id: ""
	I1202 22:34:00.643645  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.643667  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:00.643686  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:00.643770  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:00.685541  624674 cri.go:89] found id: ""
	I1202 22:34:00.685619  624674 logs.go:282] 0 containers: []
	W1202 22:34:00.685641  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:00.685662  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:00.685701  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:00.790715  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:00.790806  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:00.817289  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:00.817367  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:00.899282  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:00.899343  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:00.899377  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:00.948780  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:00.948859  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.490632  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:03.500767  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:03.500843  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:03.526573  624674 cri.go:89] found id: ""
	I1202 22:34:03.526596  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.526604  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:03.526611  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:03.526669  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:03.552489  624674 cri.go:89] found id: ""
	I1202 22:34:03.552511  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.552519  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:03.552526  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:03.552588  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:03.576752  624674 cri.go:89] found id: ""
	I1202 22:34:03.576775  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.576783  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:03.576790  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:03.576851  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:03.601474  624674 cri.go:89] found id: ""
	I1202 22:34:03.601496  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.601505  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:03.601512  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:03.601573  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:03.628170  624674 cri.go:89] found id: ""
	I1202 22:34:03.628193  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.628202  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:03.628209  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:03.628304  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:03.654811  624674 cri.go:89] found id: ""
	I1202 22:34:03.654835  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.654844  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:03.654850  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:03.654910  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:03.686565  624674 cri.go:89] found id: ""
	I1202 22:34:03.686591  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.686600  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:03.686612  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:03.686672  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:03.716776  624674 cri.go:89] found id: ""
	I1202 22:34:03.716800  624674 logs.go:282] 0 containers: []
	W1202 22:34:03.716809  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:03.716817  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:03.716829  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:03.760306  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:03.760345  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.789290  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:03.789358  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:03.857599  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:03.857637  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:03.874639  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:03.874669  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:03.946138  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:06.446391  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:06.456277  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:06.456350  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:06.480270  624674 cri.go:89] found id: ""
	I1202 22:34:06.480292  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.480301  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:06.480308  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:06.480373  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:06.505985  624674 cri.go:89] found id: ""
	I1202 22:34:06.506008  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.506017  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:06.506023  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:06.506083  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:06.531790  624674 cri.go:89] found id: ""
	I1202 22:34:06.531812  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.531820  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:06.531826  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:06.531884  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:06.562583  624674 cri.go:89] found id: ""
	I1202 22:34:06.562606  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.562614  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:06.562620  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:06.562679  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:06.590899  624674 cri.go:89] found id: ""
	I1202 22:34:06.590923  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.590932  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:06.590939  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:06.591033  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:06.618303  624674 cri.go:89] found id: ""
	I1202 22:34:06.618330  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.618338  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:06.618345  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:06.618405  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:06.644078  624674 cri.go:89] found id: ""
	I1202 22:34:06.644145  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.644160  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:06.644168  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:06.644226  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:06.683158  624674 cri.go:89] found id: ""
	I1202 22:34:06.683186  624674 logs.go:282] 0 containers: []
	W1202 22:34:06.683195  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:06.683204  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:06.683216  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:06.724179  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:06.724211  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:06.796528  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:06.796568  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:06.815435  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:06.815519  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:06.891719  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:06.891741  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:06.891754  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:09.432792  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:09.443213  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:09.443359  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:09.468712  624674 cri.go:89] found id: ""
	I1202 22:34:09.468780  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.468804  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:09.468822  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:09.468910  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:09.494473  624674 cri.go:89] found id: ""
	I1202 22:34:09.494505  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.494514  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:09.494520  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:09.494601  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:09.520921  624674 cri.go:89] found id: ""
	I1202 22:34:09.520943  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.520951  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:09.520957  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:09.521023  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:09.548244  624674 cri.go:89] found id: ""
	I1202 22:34:09.548270  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.548278  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:09.548285  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:09.548347  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:09.580070  624674 cri.go:89] found id: ""
	I1202 22:34:09.580098  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.580107  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:09.580114  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:09.580183  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:09.609849  624674 cri.go:89] found id: ""
	I1202 22:34:09.609872  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.609880  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:09.609887  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:09.609945  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:09.638137  624674 cri.go:89] found id: ""
	I1202 22:34:09.638204  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.638225  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:09.638245  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:09.638332  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:09.672096  624674 cri.go:89] found id: ""
	I1202 22:34:09.672162  624674 logs.go:282] 0 containers: []
	W1202 22:34:09.672184  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:09.672207  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:09.672241  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:09.750076  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:09.750116  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:09.766811  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:09.766903  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:09.835044  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:09.835067  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:09.835081  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:09.880954  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:09.881038  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.416034  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:12.427248  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:12.427317  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:12.452636  624674 cri.go:89] found id: ""
	I1202 22:34:12.452659  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.452668  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:12.452674  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:12.452733  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:12.477078  624674 cri.go:89] found id: ""
	I1202 22:34:12.477104  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.477112  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:12.477119  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:12.477178  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:12.503889  624674 cri.go:89] found id: ""
	I1202 22:34:12.503917  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.503926  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:12.503933  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:12.503993  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:12.531399  624674 cri.go:89] found id: ""
	I1202 22:34:12.531425  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.531434  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:12.531440  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:12.531498  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:12.556800  624674 cri.go:89] found id: ""
	I1202 22:34:12.556829  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.556838  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:12.556844  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:12.556902  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:12.583708  624674 cri.go:89] found id: ""
	I1202 22:34:12.583735  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.583744  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:12.583751  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:12.583812  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:12.609297  624674 cri.go:89] found id: ""
	I1202 22:34:12.609320  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.609329  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:12.609336  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:12.609397  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:12.635503  624674 cri.go:89] found id: ""
	I1202 22:34:12.635530  624674 logs.go:282] 0 containers: []
	W1202 22:34:12.635540  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:12.635551  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:12.635563  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.680280  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:12.680310  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:12.759190  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:12.759225  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:12.775502  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:12.775531  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:12.840361  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:12.840426  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:12.840446  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:15.382853  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:15.392678  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:15.392747  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:15.426018  624674 cri.go:89] found id: ""
	I1202 22:34:15.426041  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.426050  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:15.426056  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:15.426121  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:15.458291  624674 cri.go:89] found id: ""
	I1202 22:34:15.458313  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.458322  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:15.458328  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:15.458386  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:15.489917  624674 cri.go:89] found id: ""
	I1202 22:34:15.489940  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.489948  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:15.489954  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:15.490011  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:15.515612  624674 cri.go:89] found id: ""
	I1202 22:34:15.515635  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.515643  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:15.515650  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:15.515714  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:15.541083  624674 cri.go:89] found id: ""
	I1202 22:34:15.541109  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.541117  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:15.541125  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:15.541186  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:15.570213  624674 cri.go:89] found id: ""
	I1202 22:34:15.570248  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.570257  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:15.570265  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:15.570323  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:15.598044  624674 cri.go:89] found id: ""
	I1202 22:34:15.598068  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.598076  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:15.598082  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:15.598143  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:15.627948  624674 cri.go:89] found id: ""
	I1202 22:34:15.627971  624674 logs.go:282] 0 containers: []
	W1202 22:34:15.627980  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:15.627988  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:15.628000  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:15.696693  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:15.696784  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:15.714427  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:15.714453  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:15.780481  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:15.780504  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:15.780516  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:15.821396  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:15.821434  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.352433  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:18.362358  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:18.362431  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:18.387927  624674 cri.go:89] found id: ""
	I1202 22:34:18.387951  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.387959  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:18.387965  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:18.388026  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:18.412810  624674 cri.go:89] found id: ""
	I1202 22:34:18.412834  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.412843  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:18.412850  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:18.412910  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:18.445349  624674 cri.go:89] found id: ""
	I1202 22:34:18.445375  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.445383  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:18.445390  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:18.445448  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:18.470959  624674 cri.go:89] found id: ""
	I1202 22:34:18.470984  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.470993  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:18.471022  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:18.471079  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:18.496178  624674 cri.go:89] found id: ""
	I1202 22:34:18.496203  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.496212  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:18.496219  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:18.496297  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:18.521228  624674 cri.go:89] found id: ""
	I1202 22:34:18.521254  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.521263  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:18.521270  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:18.521347  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:18.548266  624674 cri.go:89] found id: ""
	I1202 22:34:18.548290  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.548298  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:18.548305  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:18.548361  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:18.572694  624674 cri.go:89] found id: ""
	I1202 22:34:18.572719  624674 logs.go:282] 0 containers: []
	W1202 22:34:18.572728  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:18.572743  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:18.572755  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:18.637115  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:18.637134  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:18.637147  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:18.678243  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:18.678364  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.713251  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:18.713275  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:18.791399  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:18.791436  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:21.309547  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:21.319932  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:21.320006  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:21.365441  624674 cri.go:89] found id: ""
	I1202 22:34:21.365468  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.365477  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:21.365483  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:21.365542  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:21.412755  624674 cri.go:89] found id: ""
	I1202 22:34:21.412777  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.412785  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:21.412791  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:21.412855  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:21.445280  624674 cri.go:89] found id: ""
	I1202 22:34:21.445302  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.445310  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:21.445317  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:21.445374  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:21.471666  624674 cri.go:89] found id: ""
	I1202 22:34:21.471729  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.471754  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:21.471773  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:21.471852  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:21.505993  624674 cri.go:89] found id: ""
	I1202 22:34:21.506057  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.506080  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:21.506097  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:21.506180  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:21.540553  624674 cri.go:89] found id: ""
	I1202 22:34:21.540618  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.540643  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:21.540663  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:21.540744  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:21.568105  624674 cri.go:89] found id: ""
	I1202 22:34:21.568169  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.568190  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:21.568207  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:21.568290  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:21.595400  624674 cri.go:89] found id: ""
	I1202 22:34:21.595476  624674 logs.go:282] 0 containers: []
	W1202 22:34:21.595498  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:21.595519  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:21.595555  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:21.612651  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:21.612729  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:21.734194  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:21.734240  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:21.734267  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:21.796794  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:21.796831  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:21.841673  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:21.841705  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:24.436517  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:24.446671  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:24.446740  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:24.472580  624674 cri.go:89] found id: ""
	I1202 22:34:24.472608  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.472617  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:24.472623  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:24.472682  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:24.498310  624674 cri.go:89] found id: ""
	I1202 22:34:24.498344  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.498353  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:24.498359  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:24.498439  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:24.524620  624674 cri.go:89] found id: ""
	I1202 22:34:24.524695  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.524721  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:24.524737  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:24.524818  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:24.549880  624674 cri.go:89] found id: ""
	I1202 22:34:24.549907  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.549916  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:24.549924  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:24.549983  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:24.576501  624674 cri.go:89] found id: ""
	I1202 22:34:24.576528  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.576538  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:24.576545  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:24.576609  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:24.603315  624674 cri.go:89] found id: ""
	I1202 22:34:24.603342  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.603351  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:24.603357  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:24.603419  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:24.628849  624674 cri.go:89] found id: ""
	I1202 22:34:24.628873  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.628881  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:24.628888  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:24.628946  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:24.656484  624674 cri.go:89] found id: ""
	I1202 22:34:24.656511  624674 logs.go:282] 0 containers: []
	W1202 22:34:24.656520  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:24.656530  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:24.656541  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:24.744350  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:24.744383  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:24.761885  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:24.761917  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:24.828000  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:24.828061  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:24.828078  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:24.868073  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:24.868165  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:27.398066  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:27.426292  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:27.426485  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:27.491401  624674 cri.go:89] found id: ""
	I1202 22:34:27.491507  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.491524  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:27.491531  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:27.491676  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:27.558571  624674 cri.go:89] found id: ""
	I1202 22:34:27.558597  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.558640  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:27.558696  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:27.558856  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:27.624669  624674 cri.go:89] found id: ""
	I1202 22:34:27.624698  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.624752  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:27.624803  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:27.625111  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:27.695823  624674 cri.go:89] found id: ""
	I1202 22:34:27.696028  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.696101  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:27.696126  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:27.696411  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:27.787234  624674 cri.go:89] found id: ""
	I1202 22:34:27.787292  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.787300  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:27.787313  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:27.787482  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:27.867370  624674 cri.go:89] found id: ""
	I1202 22:34:27.867441  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.867450  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:27.867457  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:27.867598  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:27.935482  624674 cri.go:89] found id: ""
	I1202 22:34:27.935567  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.935585  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:27.935592  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:27.935744  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:27.996792  624674 cri.go:89] found id: ""
	I1202 22:34:27.996872  624674 logs.go:282] 0 containers: []
	W1202 22:34:27.996922  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:27.996933  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:27.996957  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:28.042412  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:28.042463  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:28.223558  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:28.223591  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:28.223615  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:28.291011  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:28.291128  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:28.369272  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:28.369478  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:30.971128  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:30.981546  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:30.981613  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:31.015908  624674 cri.go:89] found id: ""
	I1202 22:34:31.015937  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.015946  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:31.015953  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:31.016015  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:31.043477  624674 cri.go:89] found id: ""
	I1202 22:34:31.043512  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.043521  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:31.043529  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:31.043589  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:31.074198  624674 cri.go:89] found id: ""
	I1202 22:34:31.074229  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.074237  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:31.074245  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:31.074308  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:31.101545  624674 cri.go:89] found id: ""
	I1202 22:34:31.101571  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.101580  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:31.101587  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:31.101647  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:31.129007  624674 cri.go:89] found id: ""
	I1202 22:34:31.129040  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.129049  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:31.129056  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:31.129117  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:31.159529  624674 cri.go:89] found id: ""
	I1202 22:34:31.159558  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.159567  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:31.159574  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:31.159634  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:31.187620  624674 cri.go:89] found id: ""
	I1202 22:34:31.187653  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.187663  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:31.187670  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:31.187732  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:31.212918  624674 cri.go:89] found id: ""
	I1202 22:34:31.212945  624674 logs.go:282] 0 containers: []
	W1202 22:34:31.212954  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:31.212965  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:31.212976  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:31.281087  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:31.281122  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:31.298582  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:31.298612  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:31.367762  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:31.367780  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:31.367793  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:31.408139  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:31.408172  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:33.940076  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:33.950971  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:33.951086  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:33.982736  624674 cri.go:89] found id: ""
	I1202 22:34:33.982763  624674 logs.go:282] 0 containers: []
	W1202 22:34:33.982772  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:33.982779  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:33.982870  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:34.026147  624674 cri.go:89] found id: ""
	I1202 22:34:34.026173  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.026182  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:34.026189  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:34.026268  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:34.053654  624674 cri.go:89] found id: ""
	I1202 22:34:34.053682  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.053691  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:34.053698  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:34.053781  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:34.080485  624674 cri.go:89] found id: ""
	I1202 22:34:34.080566  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.080583  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:34.080591  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:34.080669  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:34.107643  624674 cri.go:89] found id: ""
	I1202 22:34:34.107721  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.107743  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:34.107764  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:34.107855  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:34.133907  624674 cri.go:89] found id: ""
	I1202 22:34:34.133972  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.133994  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:34.134013  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:34.134106  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:34.160722  624674 cri.go:89] found id: ""
	I1202 22:34:34.160749  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.160758  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:34.160764  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:34.160825  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:34.186451  624674 cri.go:89] found id: ""
	I1202 22:34:34.186477  624674 logs.go:282] 0 containers: []
	W1202 22:34:34.186486  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:34.186495  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:34.186506  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:34.254034  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:34.254072  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:34.270313  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:34.270341  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:34.335050  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:34.335071  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:34.335087  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:34.374942  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:34.374979  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:36.904599  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:36.914694  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:36.914763  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:36.949328  624674 cri.go:89] found id: ""
	I1202 22:34:36.949350  624674 logs.go:282] 0 containers: []
	W1202 22:34:36.949358  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:36.949365  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:36.949424  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:36.976292  624674 cri.go:89] found id: ""
	I1202 22:34:36.976314  624674 logs.go:282] 0 containers: []
	W1202 22:34:36.976323  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:36.976330  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:36.976393  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:37.005849  624674 cri.go:89] found id: ""
	I1202 22:34:37.005876  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.005886  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:37.005893  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:37.005967  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:37.039524  624674 cri.go:89] found id: ""
	I1202 22:34:37.039551  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.039561  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:37.039567  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:37.039626  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:37.065670  624674 cri.go:89] found id: ""
	I1202 22:34:37.065696  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.065705  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:37.065711  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:37.065769  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:37.091187  624674 cri.go:89] found id: ""
	I1202 22:34:37.091209  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.091218  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:37.091225  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:37.091294  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:37.120570  624674 cri.go:89] found id: ""
	I1202 22:34:37.120597  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.120606  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:37.120612  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:37.120672  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:37.146489  624674 cri.go:89] found id: ""
	I1202 22:34:37.146516  624674 logs.go:282] 0 containers: []
	W1202 22:34:37.146524  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:37.146533  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:37.146544  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:37.177124  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:37.177152  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:37.244336  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:37.244374  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:37.260317  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:37.260343  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:37.330456  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:37.330477  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:37.330490  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:39.875150  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:39.884855  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:39.884932  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:39.912986  624674 cri.go:89] found id: ""
	I1202 22:34:39.913008  624674 logs.go:282] 0 containers: []
	W1202 22:34:39.913017  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:39.913023  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:39.913097  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:39.948576  624674 cri.go:89] found id: ""
	I1202 22:34:39.948597  624674 logs.go:282] 0 containers: []
	W1202 22:34:39.948606  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:39.948613  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:39.948669  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:39.976952  624674 cri.go:89] found id: ""
	I1202 22:34:39.976976  624674 logs.go:282] 0 containers: []
	W1202 22:34:39.976984  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:39.976991  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:39.977057  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:40.003524  624674 cri.go:89] found id: ""
	I1202 22:34:40.003552  624674 logs.go:282] 0 containers: []
	W1202 22:34:40.004147  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:40.004163  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:40.004330  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:40.043480  624674 cri.go:89] found id: ""
	I1202 22:34:40.043514  624674 logs.go:282] 0 containers: []
	W1202 22:34:40.043532  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:40.043540  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:40.043651  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:40.071381  624674 cri.go:89] found id: ""
	I1202 22:34:40.071409  624674 logs.go:282] 0 containers: []
	W1202 22:34:40.071418  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:40.071425  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:40.071485  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:40.098099  624674 cri.go:89] found id: ""
	I1202 22:34:40.098124  624674 logs.go:282] 0 containers: []
	W1202 22:34:40.098134  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:40.098141  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:40.098200  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:40.125390  624674 cri.go:89] found id: ""
	I1202 22:34:40.125414  624674 logs.go:282] 0 containers: []
	W1202 22:34:40.125423  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:40.125432  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:40.125443  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:40.194467  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:40.194510  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:40.212392  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:40.212421  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:40.281850  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:40.281875  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:40.281889  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:40.322372  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:40.322405  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
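Reader's note: each cycle above begins by asking the CRI runtime for every expected control-plane container by name and finding none. The sketch below is a minimal Go illustration of that listing step, assuming direct local execution and the fixed name list seen in the log (minikube itself routes these commands over SSH via its ssh_runner, so the exec.Command call here is an assumption for illustration only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
// it returns matching container IDs, one per output line.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range names {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		// An empty result corresponds to the repeated
		// `No container was found matching "<name>"` warnings above.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}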
	I1202 22:34:42.855209  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:42.865202  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:42.865273  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:42.892196  624674 cri.go:89] found id: ""
	I1202 22:34:42.892219  624674 logs.go:282] 0 containers: []
	W1202 22:34:42.892227  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:42.892234  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:42.892290  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:42.927898  624674 cri.go:89] found id: ""
	I1202 22:34:42.927921  624674 logs.go:282] 0 containers: []
	W1202 22:34:42.927929  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:42.927936  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:42.927999  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:42.958739  624674 cri.go:89] found id: ""
	I1202 22:34:42.958762  624674 logs.go:282] 0 containers: []
	W1202 22:34:42.958772  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:42.958779  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:42.958842  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:42.991509  624674 cri.go:89] found id: ""
	I1202 22:34:42.991535  624674 logs.go:282] 0 containers: []
	W1202 22:34:42.991544  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:42.991550  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:42.991608  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:43.019270  624674 cri.go:89] found id: ""
	I1202 22:34:43.019297  624674 logs.go:282] 0 containers: []
	W1202 22:34:43.019306  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:43.019312  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:43.019371  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:43.045439  624674 cri.go:89] found id: ""
	I1202 22:34:43.045465  624674 logs.go:282] 0 containers: []
	W1202 22:34:43.045474  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:43.045480  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:43.045538  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:43.072152  624674 cri.go:89] found id: ""
	I1202 22:34:43.072178  624674 logs.go:282] 0 containers: []
	W1202 22:34:43.072189  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:43.072195  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:43.072252  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:43.097643  624674 cri.go:89] found id: ""
	I1202 22:34:43.097668  624674 logs.go:282] 0 containers: []
	W1202 22:34:43.097677  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:43.097686  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:43.097697  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:43.165045  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:43.165078  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:43.181146  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:43.181174  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:43.246572  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:43.246594  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:43.246608  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:43.286361  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:43.286395  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:45.816074  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:45.826490  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:45.826560  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:45.853679  624674 cri.go:89] found id: ""
	I1202 22:34:45.853704  624674 logs.go:282] 0 containers: []
	W1202 22:34:45.853713  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:45.853720  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:45.853777  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:45.880377  624674 cri.go:89] found id: ""
	I1202 22:34:45.880403  624674 logs.go:282] 0 containers: []
	W1202 22:34:45.880413  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:45.880419  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:45.880481  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:45.909354  624674 cri.go:89] found id: ""
	I1202 22:34:45.909379  624674 logs.go:282] 0 containers: []
	W1202 22:34:45.909388  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:45.909395  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:45.909456  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:45.952961  624674 cri.go:89] found id: ""
	I1202 22:34:45.952985  624674 logs.go:282] 0 containers: []
	W1202 22:34:45.952994  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:45.953000  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:45.953074  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:45.980157  624674 cri.go:89] found id: ""
	I1202 22:34:45.980183  624674 logs.go:282] 0 containers: []
	W1202 22:34:45.980192  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:45.980198  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:45.980254  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:46.008277  624674 cri.go:89] found id: ""
	I1202 22:34:46.008308  624674 logs.go:282] 0 containers: []
	W1202 22:34:46.008318  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:46.008326  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:46.008396  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:46.040549  624674 cri.go:89] found id: ""
	I1202 22:34:46.040575  624674 logs.go:282] 0 containers: []
	W1202 22:34:46.040583  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:46.040590  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:46.040675  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:46.066756  624674 cri.go:89] found id: ""
	I1202 22:34:46.066781  624674 logs.go:282] 0 containers: []
	W1202 22:34:46.066790  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:46.066798  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:46.066818  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:46.135599  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:46.135638  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:46.152505  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:46.152537  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:46.224300  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:46.224322  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:46.224336  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:46.265279  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:46.265314  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
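Reader's note: the recurring "connection to the server localhost:8443 was refused" error is the expected symptom here: with no kube-apiserver container running, nothing is listening on the apiserver port, so kubectl's TCP connect fails outright. A minimal reachability probe, assuming the localhost:8443 endpoint shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try a plain TCP connect to the apiserver port. With no
	// kube-apiserver process, this fails with "connection refused",
	// matching the kubectl errors captured above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}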
	I1202 22:34:48.794649  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:48.804776  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:48.804851  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:48.834927  624674 cri.go:89] found id: ""
	I1202 22:34:48.834953  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.834961  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:48.834968  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:48.835095  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:48.860839  624674 cri.go:89] found id: ""
	I1202 22:34:48.860904  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.860920  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:48.860927  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:48.860988  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:48.887382  624674 cri.go:89] found id: ""
	I1202 22:34:48.887407  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.887415  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:48.887422  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:48.887479  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:48.917497  624674 cri.go:89] found id: ""
	I1202 22:34:48.917520  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.917529  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:48.917535  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:48.917595  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:48.953403  624674 cri.go:89] found id: ""
	I1202 22:34:48.953428  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.953437  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:48.953444  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:48.953502  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:48.984162  624674 cri.go:89] found id: ""
	I1202 22:34:48.984191  624674 logs.go:282] 0 containers: []
	W1202 22:34:48.984200  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:48.984207  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:48.984282  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:49.014464  624674 cri.go:89] found id: ""
	I1202 22:34:49.014489  624674 logs.go:282] 0 containers: []
	W1202 22:34:49.014498  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:49.014505  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:49.014598  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:49.040815  624674 cri.go:89] found id: ""
	I1202 22:34:49.040890  624674 logs.go:282] 0 containers: []
	W1202 22:34:49.040905  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:49.040915  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:49.040927  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:49.108048  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:49.108086  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:49.124682  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:49.124709  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:49.192592  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:49.192661  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:49.192680  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:49.235330  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:49.235371  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:51.764775  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:51.775716  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:51.775789  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:51.800440  624674 cri.go:89] found id: ""
	I1202 22:34:51.800465  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.800477  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:51.800484  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:51.800543  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:51.826013  624674 cri.go:89] found id: ""
	I1202 22:34:51.826040  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.826049  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:51.826055  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:51.826114  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:51.851231  624674 cri.go:89] found id: ""
	I1202 22:34:51.851254  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.851263  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:51.851270  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:51.851331  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:51.880761  624674 cri.go:89] found id: ""
	I1202 22:34:51.880786  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.880795  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:51.880802  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:51.880864  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:51.907183  624674 cri.go:89] found id: ""
	I1202 22:34:51.907208  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.907217  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:51.907224  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:51.907286  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:51.946646  624674 cri.go:89] found id: ""
	I1202 22:34:51.946673  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.946682  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:51.946689  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:51.946752  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:51.974968  624674 cri.go:89] found id: ""
	I1202 22:34:51.975005  624674 logs.go:282] 0 containers: []
	W1202 22:34:51.975016  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:51.975022  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:51.975081  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:52.004711  624674 cri.go:89] found id: ""
	I1202 22:34:52.004796  624674 logs.go:282] 0 containers: []
	W1202 22:34:52.004820  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:52.004844  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:52.004883  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:52.038614  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:52.038643  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:52.107933  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:52.107970  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:52.125904  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:52.125990  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:52.191236  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:52.191259  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:52.191271  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:54.732825  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:54.744711  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:54.744786  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:54.773701  624674 cri.go:89] found id: ""
	I1202 22:34:54.773726  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.773735  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:54.773741  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:54.773798  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:54.798644  624674 cri.go:89] found id: ""
	I1202 22:34:54.798669  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.798678  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:54.798689  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:54.798749  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:54.823276  624674 cri.go:89] found id: ""
	I1202 22:34:54.823300  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.823309  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:54.823316  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:54.823371  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:54.848217  624674 cri.go:89] found id: ""
	I1202 22:34:54.848242  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.848250  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:54.848258  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:54.848316  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:54.872988  624674 cri.go:89] found id: ""
	I1202 22:34:54.873022  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.873031  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:54.873037  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:54.873110  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:54.902219  624674 cri.go:89] found id: ""
	I1202 22:34:54.902290  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.902315  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:54.902333  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:54.902411  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:54.933116  624674 cri.go:89] found id: ""
	I1202 22:34:54.933156  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.933165  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:54.933171  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:54.933275  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:54.962350  624674 cri.go:89] found id: ""
	I1202 22:34:54.962415  624674 logs.go:282] 0 containers: []
	W1202 22:34:54.962438  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:54.962459  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:54.962488  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:55.032952  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:55.032978  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:55.032991  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:55.075005  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:55.075044  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:55.104731  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:55.104761  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:55.177855  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:55.177890  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
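Reader's note: the "Gathering logs for ..." steps run in a different order from cycle to cycle (kubelet first at 22:34:49, container status first at 22:34:52, describe nodes first at 22:34:54). That variation is consistent with ranging over a Go map, whose iteration order is deliberately randomized. The sketch below is a hypothetical illustration of that shape, not minikube's actual code:

package main

import "fmt"

func main() {
	// Each gatherer stands in for one of the commands seen above.
	gatherers := map[string]func(){
		"kubelet":          func() { /* journalctl -u kubelet -n 400 */ },
		"dmesg":            func() { /* dmesg ... | tail -n 400 */ },
		"describe nodes":   func() { /* kubectl describe nodes */ },
		"CRI-O":            func() { /* journalctl -u crio -n 400 */ },
		"container status": func() { /* crictl ps -a || docker ps -a */ },
	}
	// Ranging over a map may visit keys in a different order on every
	// run, matching the shuffled order across the log cycles above.
	for name, gather := range gatherers {
		fmt.Println("Gathering logs for", name, "...")
		gather()
	}
}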
	I1202 22:34:57.695156  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:57.704895  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:57.704968  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:57.729361  624674 cri.go:89] found id: ""
	I1202 22:34:57.729385  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.729394  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:57.729401  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:34:57.729460  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:57.758047  624674 cri.go:89] found id: ""
	I1202 22:34:57.758069  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.758077  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:34:57.758087  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:34:57.758145  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:57.782916  624674 cri.go:89] found id: ""
	I1202 22:34:57.782940  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.782948  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:34:57.782955  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:57.783045  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:57.810126  624674 cri.go:89] found id: ""
	I1202 22:34:57.810151  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.810164  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:57.810171  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:57.810228  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:57.835657  624674 cri.go:89] found id: ""
	I1202 22:34:57.835684  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.835694  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:57.835701  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:57.835767  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:57.862262  624674 cri.go:89] found id: ""
	I1202 22:34:57.862286  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.862294  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:57.862301  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:57.862360  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:57.887469  624674 cri.go:89] found id: ""
	I1202 22:34:57.887494  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.887502  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:57.887509  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:34:57.887598  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:34:57.915199  624674 cri.go:89] found id: ""
	I1202 22:34:57.915227  624674 logs.go:282] 0 containers: []
	W1202 22:34:57.915235  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:34:57.915244  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:57.915256  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:57.932933  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:57.932963  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:58.011589  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:58.011611  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:34:58.011625  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:34:58.053433  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:34:58.053470  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:58.081841  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:58.081878  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:00.656166  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:00.667127  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:00.667203  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:00.694391  624674 cri.go:89] found id: ""
	I1202 22:35:00.694423  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.694433  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:00.694440  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:00.694512  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:00.721280  624674 cri.go:89] found id: ""
	I1202 22:35:00.721314  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.721323  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:00.721330  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:00.721397  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:00.758082  624674 cri.go:89] found id: ""
	I1202 22:35:00.758119  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.758127  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:00.758134  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:00.758203  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:00.798386  624674 cri.go:89] found id: ""
	I1202 22:35:00.798408  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.798417  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:00.798424  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:00.798495  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:00.834240  624674 cri.go:89] found id: ""
	I1202 22:35:00.834277  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.834287  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:00.834294  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:00.834365  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:00.872566  624674 cri.go:89] found id: ""
	I1202 22:35:00.872603  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.872612  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:00.872619  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:00.872688  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:00.902593  624674 cri.go:89] found id: ""
	I1202 22:35:00.902643  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.902652  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:00.902659  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:00.902733  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:00.947239  624674 cri.go:89] found id: ""
	I1202 22:35:00.947269  624674 logs.go:282] 0 containers: []
	W1202 22:35:00.947277  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:00.947286  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:00.947297  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:01.031618  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:01.031654  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:01.064176  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:01.064203  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:01.166405  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:01.166435  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:01.166450  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:01.256394  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:01.256442  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:03.792906  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:03.803058  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:03.803143  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:03.831125  624674 cri.go:89] found id: ""
	I1202 22:35:03.831152  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.831161  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:03.831167  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:03.831224  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:03.858820  624674 cri.go:89] found id: ""
	I1202 22:35:03.858848  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.858857  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:03.858864  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:03.858929  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:03.886290  624674 cri.go:89] found id: ""
	I1202 22:35:03.886318  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.886328  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:03.886338  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:03.886401  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:03.926420  624674 cri.go:89] found id: ""
	I1202 22:35:03.926447  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.926456  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:03.926462  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:03.926529  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:03.953049  624674 cri.go:89] found id: ""
	I1202 22:35:03.953083  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.953092  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:03.953098  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:03.953160  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:03.979378  624674 cri.go:89] found id: ""
	I1202 22:35:03.979402  624674 logs.go:282] 0 containers: []
	W1202 22:35:03.979416  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:03.979422  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:03.979480  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:04.008693  624674 cri.go:89] found id: ""
	I1202 22:35:04.008722  624674 logs.go:282] 0 containers: []
	W1202 22:35:04.008731  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:04.008739  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:04.008808  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:04.041628  624674 cri.go:89] found id: ""
	I1202 22:35:04.041655  624674 logs.go:282] 0 containers: []
	W1202 22:35:04.041664  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:04.041673  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:04.041686  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:04.109120  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:04.109156  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:04.126455  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:04.126482  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:04.210718  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:04.210741  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:04.210755  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:04.259274  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:04.259306  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:06.793071  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:06.803048  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:06.803120  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:06.828371  624674 cri.go:89] found id: ""
	I1202 22:35:06.828395  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.828404  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:06.828411  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:06.828476  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:06.860578  624674 cri.go:89] found id: ""
	I1202 22:35:06.860608  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.860617  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:06.860623  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:06.860732  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:06.885255  624674 cri.go:89] found id: ""
	I1202 22:35:06.885321  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.885336  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:06.885344  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:06.885402  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:06.910730  624674 cri.go:89] found id: ""
	I1202 22:35:06.910753  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.910762  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:06.910769  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:06.910832  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:06.935620  624674 cri.go:89] found id: ""
	I1202 22:35:06.935686  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.935708  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:06.935725  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:06.935816  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:06.963336  624674 cri.go:89] found id: ""
	I1202 22:35:06.963360  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.963372  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:06.963381  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:06.963439  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:06.988782  624674 cri.go:89] found id: ""
	I1202 22:35:06.988803  624674 logs.go:282] 0 containers: []
	W1202 22:35:06.988812  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:06.988818  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:06.988882  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:07.017154  624674 cri.go:89] found id: ""
	I1202 22:35:07.017224  624674 logs.go:282] 0 containers: []
	W1202 22:35:07.017241  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:07.017252  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:07.017265  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:07.085627  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:07.085664  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:07.101928  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:07.101956  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:07.182986  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:07.183027  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:07.183039  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:07.228959  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:07.229001  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
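Reader's note: the pgrep probes recur roughly every three seconds (22:35:00.656, 22:35:03.792, 22:35:06.793, ...), consistent with a poll-until-healthy loop on a fixed interval. A hypothetical Go sketch of that shape follows; the interval, timeout, and helper names are assumptions, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// pgrep exits non-zero when no matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls until the apiserver process appears or the
// timeout elapses, like the repeated probes in the log above.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		// Between probes, minikube gathers kubelet/dmesg/CRI-O logs;
		// here we simply sleep out the interval.
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}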
	I1202 22:35:09.764023  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:09.773874  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:09.773952  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:09.801281  624674 cri.go:89] found id: ""
	I1202 22:35:09.801304  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.801312  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:09.801319  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:09.801379  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:09.829112  624674 cri.go:89] found id: ""
	I1202 22:35:09.829134  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.829142  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:09.829149  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:09.829209  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:09.855372  624674 cri.go:89] found id: ""
	I1202 22:35:09.855395  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.855404  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:09.855410  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:09.855467  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:09.879880  624674 cri.go:89] found id: ""
	I1202 22:35:09.879903  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.879911  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:09.879918  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:09.879977  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:09.906046  624674 cri.go:89] found id: ""
	I1202 22:35:09.906071  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.906080  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:09.906086  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:09.906145  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:09.934880  624674 cri.go:89] found id: ""
	I1202 22:35:09.934905  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.934914  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:09.934921  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:09.935035  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:09.960836  624674 cri.go:89] found id: ""
	I1202 22:35:09.960859  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.960873  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:09.960881  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:09.960939  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:09.986261  624674 cri.go:89] found id: ""
	I1202 22:35:09.986282  624674 logs.go:282] 0 containers: []
	W1202 22:35:09.986290  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:09.986299  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:09.986311  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:10.056929  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:10.056971  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:10.074324  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:10.074351  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:10.141434  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:10.141496  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:10.141516  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:10.182128  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:10.182163  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:12.722628  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:12.742815  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:12.744038  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:12.781866  624674 cri.go:89] found id: ""
	I1202 22:35:12.781888  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.781896  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:12.781903  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:12.781969  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:12.820572  624674 cri.go:89] found id: ""
	I1202 22:35:12.820595  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.820604  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:12.820610  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:12.820670  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:12.862166  624674 cri.go:89] found id: ""
	I1202 22:35:12.862189  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.862198  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:12.862205  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:12.862266  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:12.901924  624674 cri.go:89] found id: ""
	I1202 22:35:12.901947  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.901956  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:12.901963  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:12.902025  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:12.940791  624674 cri.go:89] found id: ""
	I1202 22:35:12.940865  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.940889  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:12.940908  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:12.940982  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:12.970891  624674 cri.go:89] found id: ""
	I1202 22:35:12.970916  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.970924  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:12.970930  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:12.970986  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:12.998396  624674 cri.go:89] found id: ""
	I1202 22:35:12.998421  624674 logs.go:282] 0 containers: []
	W1202 22:35:12.998430  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:12.998437  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:12.998496  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:13.033435  624674 cri.go:89] found id: ""
	I1202 22:35:13.033463  624674 logs.go:282] 0 containers: []
	W1202 22:35:13.033472  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:13.033481  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:13.033494  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:13.110237  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:13.110272  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:13.126523  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:13.126550  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:13.202615  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:13.202635  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:13.202650  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:13.245972  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:13.246007  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
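	The cycle above then repeats roughly every three seconds: minikube polls for a kube-apiserver process, lists each expected control-plane container through crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that poll-and-collect pattern follows; only the shell commands are taken from the log, while runCmd, the iteration bound, and the component list are illustrative stand-ins for minikube's internal ssh_runner and cri helpers, not their actual API.

	// Hypothetical sketch of the poll-and-collect loop visible in the log above.
	// Only the shell commands are verbatim from the log; everything else is a
	// stand-in, not minikube's real implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runCmd shells out locally; the real run executes these over SSH on the node.
	func runCmd(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		// The real loop retries until an overall start timeout; three passes here.
		for i := 0; i < 3; i++ {
			// Poll for a running apiserver process, as in the log's pgrep line.
			if _, err := runCmd(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
				fmt.Println("apiserver process found; further health checks would follow")
			}
			// List every expected control-plane container; an empty ID list
			// produces the log's "No container was found matching" warnings.
			for _, name := range components {
				ids, _ := runCmd("sudo crictl ps -a --quiet --name=" + name)
				if strings.TrimSpace(ids) == "" {
					fmt.Printf("no container found matching %q\n", name)
				}
			}
			time.Sleep(3 * time.Second) // the log shows ~3s between polls
		}
	}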
	I1202 22:35:15.778198  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:15.790004  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:15.790078  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:15.828249  624674 cri.go:89] found id: ""
	I1202 22:35:15.828270  624674 logs.go:282] 0 containers: []
	W1202 22:35:15.828280  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:15.828286  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:15.828342  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:15.871488  624674 cri.go:89] found id: ""
	I1202 22:35:15.871509  624674 logs.go:282] 0 containers: []
	W1202 22:35:15.871517  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:15.871523  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:15.871578  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:15.901665  624674 cri.go:89] found id: ""
	I1202 22:35:15.901687  624674 logs.go:282] 0 containers: []
	W1202 22:35:15.901695  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:15.901701  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:15.901757  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:15.935594  624674 cri.go:89] found id: ""
	I1202 22:35:15.935615  624674 logs.go:282] 0 containers: []
	W1202 22:35:15.935623  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:15.935629  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:15.935684  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:15.977269  624674 cri.go:89] found id: ""
	I1202 22:35:15.977292  624674 logs.go:282] 0 containers: []
	W1202 22:35:15.977300  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:15.977309  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:15.977365  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:16.009723  624674 cri.go:89] found id: ""
	I1202 22:35:16.009746  624674 logs.go:282] 0 containers: []
	W1202 22:35:16.009754  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:16.009761  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:16.009825  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:16.040806  624674 cri.go:89] found id: ""
	I1202 22:35:16.040828  624674 logs.go:282] 0 containers: []
	W1202 22:35:16.040836  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:16.040842  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:16.040904  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:16.083103  624674 cri.go:89] found id: ""
	I1202 22:35:16.083125  624674 logs.go:282] 0 containers: []
	W1202 22:35:16.083140  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:16.083150  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:16.083162  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:16.155456  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:16.155494  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:16.186602  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:16.186749  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:16.284004  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:16.284071  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:16.284096  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:16.339749  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:16.339837  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:18.881637  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:18.892123  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:18.892194  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:18.917142  624674 cri.go:89] found id: ""
	I1202 22:35:18.917167  624674 logs.go:282] 0 containers: []
	W1202 22:35:18.917175  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:18.917182  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:18.917241  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:18.946555  624674 cri.go:89] found id: ""
	I1202 22:35:18.946577  624674 logs.go:282] 0 containers: []
	W1202 22:35:18.946586  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:18.946593  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:18.946651  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:18.972247  624674 cri.go:89] found id: ""
	I1202 22:35:18.972272  624674 logs.go:282] 0 containers: []
	W1202 22:35:18.972280  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:18.972286  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:18.972386  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:18.998952  624674 cri.go:89] found id: ""
	I1202 22:35:18.998976  624674 logs.go:282] 0 containers: []
	W1202 22:35:18.998984  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:18.998991  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:18.999073  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:19.041167  624674 cri.go:89] found id: ""
	I1202 22:35:19.041191  624674 logs.go:282] 0 containers: []
	W1202 22:35:19.041200  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:19.041206  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:19.041264  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:19.069837  624674 cri.go:89] found id: ""
	I1202 22:35:19.069857  624674 logs.go:282] 0 containers: []
	W1202 22:35:19.069865  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:19.069873  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:19.069924  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:19.100062  624674 cri.go:89] found id: ""
	I1202 22:35:19.100087  624674 logs.go:282] 0 containers: []
	W1202 22:35:19.100096  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:19.100103  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:19.100160  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:19.142766  624674 cri.go:89] found id: ""
	I1202 22:35:19.142845  624674 logs.go:282] 0 containers: []
	W1202 22:35:19.142869  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:19.142890  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:19.142917  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:19.197892  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:19.197958  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:19.234510  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:19.234535  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:19.322322  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:19.322360  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:19.342590  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:19.342619  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:19.440001  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
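	Each describe-nodes attempt fails identically: kubectl is refused on localhost:8443 because no apiserver container ever comes up. Independent of kubectl, a plain TCP dial confirms whether anything is listening on that port at all; this check is illustrative and not part of the minikube run.

	// Illustrative check (not from the log): probe the apiserver port that
	// kubectl is being refused on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the log's symptom: connection refused, apiserver not up.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}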
	I1202 22:35:21.940995  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:21.950968  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:21.951068  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:21.975885  624674 cri.go:89] found id: ""
	I1202 22:35:21.975909  624674 logs.go:282] 0 containers: []
	W1202 22:35:21.975918  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:21.975925  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:21.975982  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:22.005718  624674 cri.go:89] found id: ""
	I1202 22:35:22.005747  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.005757  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:22.005764  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:22.005919  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:22.033193  624674 cri.go:89] found id: ""
	I1202 22:35:22.033223  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.033232  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:22.033238  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:22.033302  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:22.059443  624674 cri.go:89] found id: ""
	I1202 22:35:22.059466  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.059475  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:22.059481  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:22.059539  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:22.091027  624674 cri.go:89] found id: ""
	I1202 22:35:22.091052  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.091061  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:22.091068  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:22.091139  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:22.117470  624674 cri.go:89] found id: ""
	I1202 22:35:22.117498  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.117507  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:22.117513  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:22.117571  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:22.143117  624674 cri.go:89] found id: ""
	I1202 22:35:22.143141  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.143150  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:22.143156  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:22.143216  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:22.180666  624674 cri.go:89] found id: ""
	I1202 22:35:22.180805  624674 logs.go:282] 0 containers: []
	W1202 22:35:22.180829  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:22.180852  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:22.180880  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:22.223372  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:22.223449  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:22.293400  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:22.293435  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:22.312331  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:22.312364  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:22.380385  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:22.380407  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:22.380420  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:24.923099  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:24.932978  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:24.933052  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:24.957484  624674 cri.go:89] found id: ""
	I1202 22:35:24.957506  624674 logs.go:282] 0 containers: []
	W1202 22:35:24.957514  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:24.957521  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:24.957579  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:24.982450  624674 cri.go:89] found id: ""
	I1202 22:35:24.982473  624674 logs.go:282] 0 containers: []
	W1202 22:35:24.982481  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:24.982488  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:24.982543  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:25.016614  624674 cri.go:89] found id: ""
	I1202 22:35:25.016641  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.016651  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:25.016659  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:25.016728  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:25.049332  624674 cri.go:89] found id: ""
	I1202 22:35:25.049358  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.049367  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:25.049374  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:25.049437  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:25.083655  624674 cri.go:89] found id: ""
	I1202 22:35:25.083679  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.083689  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:25.083695  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:25.083759  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:25.111660  624674 cri.go:89] found id: ""
	I1202 22:35:25.111686  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.111695  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:25.111702  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:25.111763  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:25.138123  624674 cri.go:89] found id: ""
	I1202 22:35:25.138149  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.138157  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:25.138164  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:25.138225  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:25.174599  624674 cri.go:89] found id: ""
	I1202 22:35:25.174623  624674 logs.go:282] 0 containers: []
	W1202 22:35:25.174630  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:25.174639  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:25.174651  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:25.254027  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:25.254068  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:25.270799  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:25.270835  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:25.340126  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:25.340147  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:25.340161  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:25.381813  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:25.381849  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:27.915800  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:27.926354  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:27.926464  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:27.952952  624674 cri.go:89] found id: ""
	I1202 22:35:27.953029  624674 logs.go:282] 0 containers: []
	W1202 22:35:27.953052  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:27.953066  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:27.953143  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:27.979913  624674 cri.go:89] found id: ""
	I1202 22:35:27.979936  624674 logs.go:282] 0 containers: []
	W1202 22:35:27.979944  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:27.979951  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:27.980043  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:28.007635  624674 cri.go:89] found id: ""
	I1202 22:35:28.007674  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.007683  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:28.007707  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:28.007800  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:28.039128  624674 cri.go:89] found id: ""
	I1202 22:35:28.039155  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.039164  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:28.039171  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:28.039248  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:28.065575  624674 cri.go:89] found id: ""
	I1202 22:35:28.065605  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.065614  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:28.065621  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:28.065681  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:28.090850  624674 cri.go:89] found id: ""
	I1202 22:35:28.090876  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.090885  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:28.090892  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:28.090950  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:28.116371  624674 cri.go:89] found id: ""
	I1202 22:35:28.116395  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.116404  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:28.116411  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:28.116468  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:28.142550  624674 cri.go:89] found id: ""
	I1202 22:35:28.142621  624674 logs.go:282] 0 containers: []
	W1202 22:35:28.142646  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:28.142690  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:28.142727  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:28.216487  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:28.216523  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:28.233809  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:28.233837  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:28.307755  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:28.307775  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:28.307799  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:28.348135  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:28.348169  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:30.877478  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:30.887549  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:30.887619  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:30.915311  624674 cri.go:89] found id: ""
	I1202 22:35:30.915334  624674 logs.go:282] 0 containers: []
	W1202 22:35:30.915343  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:30.915350  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:30.915406  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:30.940496  624674 cri.go:89] found id: ""
	I1202 22:35:30.940518  624674 logs.go:282] 0 containers: []
	W1202 22:35:30.940526  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:30.940533  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:30.940590  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:30.972328  624674 cri.go:89] found id: ""
	I1202 22:35:30.972351  624674 logs.go:282] 0 containers: []
	W1202 22:35:30.972359  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:30.972365  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:30.972421  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:30.996712  624674 cri.go:89] found id: ""
	I1202 22:35:30.996735  624674 logs.go:282] 0 containers: []
	W1202 22:35:30.996744  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:30.996751  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:30.996808  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:31.024689  624674 cri.go:89] found id: ""
	I1202 22:35:31.024727  624674 logs.go:282] 0 containers: []
	W1202 22:35:31.024738  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:31.024746  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:31.024812  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:31.051499  624674 cri.go:89] found id: ""
	I1202 22:35:31.051565  624674 logs.go:282] 0 containers: []
	W1202 22:35:31.051580  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:31.051587  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:31.051647  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:31.077425  624674 cri.go:89] found id: ""
	I1202 22:35:31.077451  624674 logs.go:282] 0 containers: []
	W1202 22:35:31.077468  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:31.077475  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:31.077534  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:31.102396  624674 cri.go:89] found id: ""
	I1202 22:35:31.102425  624674 logs.go:282] 0 containers: []
	W1202 22:35:31.102442  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:31.102451  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:31.102463  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:31.179158  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:31.179189  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:31.179202  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:31.222288  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:31.222326  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:31.252421  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:31.252451  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:31.324256  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:31.324291  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:33.840345  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:33.851545  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:33.851618  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:33.877527  624674 cri.go:89] found id: ""
	I1202 22:35:33.877602  624674 logs.go:282] 0 containers: []
	W1202 22:35:33.877624  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:33.877643  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:33.877727  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:33.904302  624674 cri.go:89] found id: ""
	I1202 22:35:33.904372  624674 logs.go:282] 0 containers: []
	W1202 22:35:33.904386  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:33.904394  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:33.911304  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:33.941124  624674 cri.go:89] found id: ""
	I1202 22:35:33.941148  624674 logs.go:282] 0 containers: []
	W1202 22:35:33.941157  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:33.941163  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:33.941225  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:33.970140  624674 cri.go:89] found id: ""
	I1202 22:35:33.970166  624674 logs.go:282] 0 containers: []
	W1202 22:35:33.970174  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:33.970181  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:33.970244  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:33.995414  624674 cri.go:89] found id: ""
	I1202 22:35:33.995438  624674 logs.go:282] 0 containers: []
	W1202 22:35:33.995447  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:33.995454  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:33.995513  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:34.024083  624674 cri.go:89] found id: ""
	I1202 22:35:34.024117  624674 logs.go:282] 0 containers: []
	W1202 22:35:34.024126  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:34.024133  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:34.024199  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:34.050602  624674 cri.go:89] found id: ""
	I1202 22:35:34.050629  624674 logs.go:282] 0 containers: []
	W1202 22:35:34.050642  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:34.050649  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:34.050729  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:34.077768  624674 cri.go:89] found id: ""
	I1202 22:35:34.077792  624674 logs.go:282] 0 containers: []
	W1202 22:35:34.077801  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:34.077810  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:34.077825  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:34.145340  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:34.145376  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:34.162103  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:34.162130  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:34.239104  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:34.239168  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:34.239188  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:34.280195  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:34.280231  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:36.816347  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:36.828222  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:36.828326  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:36.858934  624674 cri.go:89] found id: ""
	I1202 22:35:36.858971  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.858980  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:36.858987  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:36.859077  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:36.885729  624674 cri.go:89] found id: ""
	I1202 22:35:36.885752  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.885760  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:36.885766  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:36.885833  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:36.914095  624674 cri.go:89] found id: ""
	I1202 22:35:36.914115  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.914124  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:36.914130  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:36.914189  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:36.939588  624674 cri.go:89] found id: ""
	I1202 22:35:36.939616  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.939626  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:36.939633  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:36.939692  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:36.969512  624674 cri.go:89] found id: ""
	I1202 22:35:36.969535  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.969544  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:36.969550  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:36.969614  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:36.995415  624674 cri.go:89] found id: ""
	I1202 22:35:36.995438  624674 logs.go:282] 0 containers: []
	W1202 22:35:36.995447  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:36.995456  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:36.995512  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:37.026071  624674 cri.go:89] found id: ""
	I1202 22:35:37.026102  624674 logs.go:282] 0 containers: []
	W1202 22:35:37.026112  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:37.026126  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:37.026194  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:37.053215  624674 cri.go:89] found id: ""
	I1202 22:35:37.053238  624674 logs.go:282] 0 containers: []
	W1202 22:35:37.053246  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:37.053256  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:37.053267  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:37.120512  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:37.120559  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:37.137056  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:37.137086  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:37.229837  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:37.229924  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:37.229951  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:37.275069  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:37.275106  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
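	The container-status gather relies on a shell fallback: resolve crictl via which, and if the crictl listing fails, retry with docker ps -a. A hedged Go equivalent of that try-one-then-the-other pattern, with the command strings taken from the log and the surrounding function invented for illustration:

	// Sketch of the try-crictl-then-docker fallback used for the
	// "container status" gather. Command strings are from the log; the
	// function and its caller are illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() (string, error) {
		// Prefer crictl (path resolved via `which`), as the log's one-liner does.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		// Fall back to docker if crictl is missing or its listing fails.
		out, err = exec.Command("/bin/bash", "-c", "sudo docker ps -a").CombinedOutput()
		return string(out), err
	}

	func main() {
		status, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(status)
	}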
	I1202 22:35:39.808669  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:39.818631  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:39.818700  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:39.843418  624674 cri.go:89] found id: ""
	I1202 22:35:39.843442  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.843450  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:39.843456  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:39.843513  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:39.872785  624674 cri.go:89] found id: ""
	I1202 22:35:39.872846  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.872868  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:39.872882  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:39.872947  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:39.897767  624674 cri.go:89] found id: ""
	I1202 22:35:39.897792  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.897800  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:39.897807  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:39.897868  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:39.923310  624674 cri.go:89] found id: ""
	I1202 22:35:39.923332  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.923340  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:39.923347  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:39.923406  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:39.950712  624674 cri.go:89] found id: ""
	I1202 22:35:39.950737  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.950746  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:39.950752  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:39.950814  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:39.975584  624674 cri.go:89] found id: ""
	I1202 22:35:39.975607  624674 logs.go:282] 0 containers: []
	W1202 22:35:39.975616  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:39.975623  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:39.975684  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:40.024204  624674 cri.go:89] found id: ""
	I1202 22:35:40.024240  624674 logs.go:282] 0 containers: []
	W1202 22:35:40.024249  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:40.024257  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:40.024324  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:40.056660  624674 cri.go:89] found id: ""
	I1202 22:35:40.056683  624674 logs.go:282] 0 containers: []
	W1202 22:35:40.056691  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:40.056701  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:40.056713  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:40.125026  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:40.125054  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:40.125069  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:40.169412  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:40.169454  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:40.200268  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:40.200297  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:40.277263  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:40.277302  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:42.794214  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:42.804890  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:42.804965  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:42.840092  624674 cri.go:89] found id: ""
	I1202 22:35:42.840119  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.840128  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:42.840135  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:42.840199  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:42.866120  624674 cri.go:89] found id: ""
	I1202 22:35:42.866145  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.866154  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:42.866160  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:42.866221  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:42.892967  624674 cri.go:89] found id: ""
	I1202 22:35:42.892994  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.893003  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:42.893009  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:42.893068  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:42.925471  624674 cri.go:89] found id: ""
	I1202 22:35:42.925500  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.925508  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:42.925515  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:42.925579  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:42.961757  624674 cri.go:89] found id: ""
	I1202 22:35:42.961784  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.961802  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:42.961809  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:42.961882  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:42.990626  624674 cri.go:89] found id: ""
	I1202 22:35:42.990653  624674 logs.go:282] 0 containers: []
	W1202 22:35:42.990661  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:42.990668  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:42.990725  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:43.021925  624674 cri.go:89] found id: ""
	I1202 22:35:43.021948  624674 logs.go:282] 0 containers: []
	W1202 22:35:43.021957  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:43.021963  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:43.022024  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:43.052612  624674 cri.go:89] found id: ""
	I1202 22:35:43.052634  624674 logs.go:282] 0 containers: []
	W1202 22:35:43.052641  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:43.052650  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:43.052663  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:43.083763  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:43.083795  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:43.154117  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:43.154160  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:43.181205  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:43.181235  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:43.302043  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:43.302065  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:43.302080  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:45.851261  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:45.862762  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:45.862827  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:45.904367  624674 cri.go:89] found id: ""
	I1202 22:35:45.904403  624674 logs.go:282] 0 containers: []
	W1202 22:35:45.904415  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:45.904422  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:45.904493  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:45.934799  624674 cri.go:89] found id: ""
	I1202 22:35:45.934822  624674 logs.go:282] 0 containers: []
	W1202 22:35:45.934830  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:45.934836  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:45.934901  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:45.965314  624674 cri.go:89] found id: ""
	I1202 22:35:45.965338  624674 logs.go:282] 0 containers: []
	W1202 22:35:45.965346  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:45.965352  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:45.965410  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:45.997387  624674 cri.go:89] found id: ""
	I1202 22:35:45.997408  624674 logs.go:282] 0 containers: []
	W1202 22:35:45.997417  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:45.997425  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:45.997483  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:46.061297  624674 cri.go:89] found id: ""
	I1202 22:35:46.061323  624674 logs.go:282] 0 containers: []
	W1202 22:35:46.061331  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:46.061338  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:46.061404  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:46.088124  624674 cri.go:89] found id: ""
	I1202 22:35:46.088155  624674 logs.go:282] 0 containers: []
	W1202 22:35:46.088163  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:46.088170  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:46.088242  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:46.114035  624674 cri.go:89] found id: ""
	I1202 22:35:46.114071  624674 logs.go:282] 0 containers: []
	W1202 22:35:46.114080  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:46.114090  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:46.114156  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:46.141875  624674 cri.go:89] found id: ""
	I1202 22:35:46.141897  624674 logs.go:282] 0 containers: []
	W1202 22:35:46.141905  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:46.141928  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:46.141942  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:46.225235  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:46.225352  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:46.244858  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:46.244958  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:46.320585  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:46.320604  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:46.320616  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:46.362969  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:46.363013  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
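
The cycles above all run the same per-component lookup: for each expected control-plane container, list matching CRI containers and warn when none exists. A minimal bash sketch of that check, built only from commands that appear verbatim in the entries above (run on the node, e.g. via `minikube ssh`; the loop wrapper is illustrative, not minikube's own code):

#!/usr/bin/env bash
# Per-component container check, mirroring the ssh_runner entries above.
set -u
components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
            kube-controller-manager kindnet storage-provisioner)
for c in "${components[@]}"; do
  # Same invocation as in the log; empty output corresponds to the
  # `found id: ""` / `No container was found matching ...` lines.
  ids=$(sudo crictl ps -a --quiet --name="$c")
  if [ -z "$ids" ]; then
    echo "no container found matching \"$c\""
  else
    echo "$c: $ids"
  fi
done
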
	I1202 22:35:48.893487  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:48.903958  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:48.904037  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:48.932852  624674 cri.go:89] found id: ""
	I1202 22:35:48.932876  624674 logs.go:282] 0 containers: []
	W1202 22:35:48.932884  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:48.932891  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:48.932959  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:48.959218  624674 cri.go:89] found id: ""
	I1202 22:35:48.959243  624674 logs.go:282] 0 containers: []
	W1202 22:35:48.959252  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:48.959258  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:48.959318  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:48.990444  624674 cri.go:89] found id: ""
	I1202 22:35:48.990467  624674 logs.go:282] 0 containers: []
	W1202 22:35:48.990476  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:48.990482  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:48.990545  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:49.017643  624674 cri.go:89] found id: ""
	I1202 22:35:49.017668  624674 logs.go:282] 0 containers: []
	W1202 22:35:49.017677  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:49.017684  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:49.017758  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:49.044514  624674 cri.go:89] found id: ""
	I1202 22:35:49.044575  624674 logs.go:282] 0 containers: []
	W1202 22:35:49.044598  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:49.044617  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:49.044692  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:49.072038  624674 cri.go:89] found id: ""
	I1202 22:35:49.072068  624674 logs.go:282] 0 containers: []
	W1202 22:35:49.072077  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:49.072084  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:49.072147  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:49.099314  624674 cri.go:89] found id: ""
	I1202 22:35:49.099339  624674 logs.go:282] 0 containers: []
	W1202 22:35:49.099348  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:49.099356  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:49.099418  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:49.125204  624674 cri.go:89] found id: ""
	I1202 22:35:49.125228  624674 logs.go:282] 0 containers: []
	W1202 22:35:49.125237  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:49.125246  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:49.125258  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:49.192857  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:49.192892  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:49.209154  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:49.209182  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:49.276206  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:49.276279  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:49.276300  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:49.316888  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:49.316923  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:51.847991  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:51.859961  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:51.860033  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:51.885496  624674 cri.go:89] found id: ""
	I1202 22:35:51.885522  624674 logs.go:282] 0 containers: []
	W1202 22:35:51.885531  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:51.885538  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:51.885598  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:51.913561  624674 cri.go:89] found id: ""
	I1202 22:35:51.913586  624674 logs.go:282] 0 containers: []
	W1202 22:35:51.913596  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:51.913602  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:51.913666  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:51.941964  624674 cri.go:89] found id: ""
	I1202 22:35:51.941989  624674 logs.go:282] 0 containers: []
	W1202 22:35:51.941998  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:51.942004  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:51.942063  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:51.972038  624674 cri.go:89] found id: ""
	I1202 22:35:51.972062  624674 logs.go:282] 0 containers: []
	W1202 22:35:51.972070  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:51.972077  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:51.972133  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:51.997028  624674 cri.go:89] found id: ""
	I1202 22:35:51.997052  624674 logs.go:282] 0 containers: []
	W1202 22:35:51.997060  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:51.997067  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:51.997154  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:52.027893  624674 cri.go:89] found id: ""
	I1202 22:35:52.027916  624674 logs.go:282] 0 containers: []
	W1202 22:35:52.027930  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:52.027938  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:52.028013  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:52.055274  624674 cri.go:89] found id: ""
	I1202 22:35:52.055300  624674 logs.go:282] 0 containers: []
	W1202 22:35:52.055309  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:52.055316  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:52.055380  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:52.082005  624674 cri.go:89] found id: ""
	I1202 22:35:52.082030  624674 logs.go:282] 0 containers: []
	W1202 22:35:52.082039  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:52.082048  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:52.082060  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:52.098247  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:52.098276  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:52.165903  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:52.165923  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:52.165936  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:52.207689  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:52.207722  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:52.238965  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:52.239019  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:54.812181  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:54.823687  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:54.823754  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:54.862117  624674 cri.go:89] found id: ""
	I1202 22:35:54.862139  624674 logs.go:282] 0 containers: []
	W1202 22:35:54.862147  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:54.862154  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:54.862223  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:54.929765  624674 cri.go:89] found id: ""
	I1202 22:35:54.929792  624674 logs.go:282] 0 containers: []
	W1202 22:35:54.929800  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:54.929808  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:54.929879  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:54.969876  624674 cri.go:89] found id: ""
	I1202 22:35:54.969913  624674 logs.go:282] 0 containers: []
	W1202 22:35:54.969926  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:54.969938  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:54.970026  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:55.022270  624674 cri.go:89] found id: ""
	I1202 22:35:55.022294  624674 logs.go:282] 0 containers: []
	W1202 22:35:55.022304  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:55.022311  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:55.022372  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:55.061636  624674 cri.go:89] found id: ""
	I1202 22:35:55.061659  624674 logs.go:282] 0 containers: []
	W1202 22:35:55.061667  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:55.061674  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:55.061733  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:55.092384  624674 cri.go:89] found id: ""
	I1202 22:35:55.092414  624674 logs.go:282] 0 containers: []
	W1202 22:35:55.092423  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:55.092430  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:55.092488  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:55.124591  624674 cri.go:89] found id: ""
	I1202 22:35:55.124613  624674 logs.go:282] 0 containers: []
	W1202 22:35:55.124622  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:55.124628  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:55.124732  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:55.152530  624674 cri.go:89] found id: ""
	I1202 22:35:55.152551  624674 logs.go:282] 0 containers: []
	W1202 22:35:55.152560  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:55.152569  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:55.152580  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:55.236317  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:55.236351  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:55.252887  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:55.252917  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:55.317544  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:55.317562  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:55.317574  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:35:55.359381  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:55.359416  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:57.891132  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:35:57.903460  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:35:57.903536  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:35:57.944949  624674 cri.go:89] found id: ""
	I1202 22:35:57.944973  624674 logs.go:282] 0 containers: []
	W1202 22:35:57.944981  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:35:57.944989  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:35:57.945047  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:35:57.991070  624674 cri.go:89] found id: ""
	I1202 22:35:57.991098  624674 logs.go:282] 0 containers: []
	W1202 22:35:57.991107  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:35:57.991113  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:35:57.991173  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:35:58.030754  624674 cri.go:89] found id: ""
	I1202 22:35:58.030782  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.030791  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:35:58.030799  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:35:58.030863  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:35:58.064914  624674 cri.go:89] found id: ""
	I1202 22:35:58.064940  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.064950  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:35:58.064956  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:35:58.065017  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:35:58.107713  624674 cri.go:89] found id: ""
	I1202 22:35:58.107738  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.107747  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:35:58.107753  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:35:58.107814  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:35:58.144665  624674 cri.go:89] found id: ""
	I1202 22:35:58.144689  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.144697  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:35:58.144704  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:35:58.144763  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:35:58.176217  624674 cri.go:89] found id: ""
	I1202 22:35:58.176245  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.176254  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:35:58.176260  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:35:58.176325  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:35:58.212098  624674 cri.go:89] found id: ""
	I1202 22:35:58.212126  624674 logs.go:282] 0 containers: []
	W1202 22:35:58.212134  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:35:58.212144  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:35:58.212162  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:35:58.245951  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:35:58.245976  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:35:58.329727  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:35:58.329812  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:35:58.348656  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:35:58.348684  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:35:58.508290  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:35:58.508360  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:35:58.508387  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:01.070136  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:01.080661  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:01.080763  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:01.107026  624674 cri.go:89] found id: ""
	I1202 22:36:01.107053  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.107062  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:01.107069  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:01.107179  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:01.135487  624674 cri.go:89] found id: ""
	I1202 22:36:01.135523  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.135533  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:01.135539  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:01.135601  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:01.162975  624674 cri.go:89] found id: ""
	I1202 22:36:01.163022  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.163031  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:01.163038  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:01.163122  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:01.190942  624674 cri.go:89] found id: ""
	I1202 22:36:01.190969  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.190977  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:01.190984  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:01.191068  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:01.223130  624674 cri.go:89] found id: ""
	I1202 22:36:01.223157  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.223166  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:01.223173  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:01.223236  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:01.252018  624674 cri.go:89] found id: ""
	I1202 22:36:01.252044  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.252053  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:01.252060  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:01.252123  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:01.285142  624674 cri.go:89] found id: ""
	I1202 22:36:01.285169  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.285178  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:01.285185  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:01.285247  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:01.334298  624674 cri.go:89] found id: ""
	I1202 22:36:01.334327  624674 logs.go:282] 0 containers: []
	W1202 22:36:01.334336  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:01.334346  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:01.334398  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:01.414338  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:01.414377  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:01.440729  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:01.440760  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:01.550175  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:01.550198  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:01.550214  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:01.599590  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:01.599630  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:04.133691  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:04.143757  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:04.143826  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:04.169777  624674 cri.go:89] found id: ""
	I1202 22:36:04.169799  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.169808  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:04.169814  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:04.169871  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:04.198173  624674 cri.go:89] found id: ""
	I1202 22:36:04.198195  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.198204  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:04.198210  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:04.198274  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:04.227757  624674 cri.go:89] found id: ""
	I1202 22:36:04.227782  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.227790  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:04.227797  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:04.227855  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:04.257391  624674 cri.go:89] found id: ""
	I1202 22:36:04.257420  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.257430  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:04.257437  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:04.257493  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:04.286182  624674 cri.go:89] found id: ""
	I1202 22:36:04.286209  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.286218  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:04.286225  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:04.286281  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:04.313658  624674 cri.go:89] found id: ""
	I1202 22:36:04.313683  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.313695  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:04.313701  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:04.313762  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:04.340877  624674 cri.go:89] found id: ""
	I1202 22:36:04.340902  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.340911  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:04.340917  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:04.340974  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:04.367638  624674 cri.go:89] found id: ""
	I1202 22:36:04.367668  624674 logs.go:282] 0 containers: []
	W1202 22:36:04.367676  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:04.367686  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:04.367697  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:04.396393  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:04.396471  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:04.464749  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:04.464792  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:04.481004  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:04.481033  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:04.548724  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:04.548755  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:04.548768  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:07.089819  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:07.099895  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:07.099968  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:07.128918  624674 cri.go:89] found id: ""
	I1202 22:36:07.128942  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.128951  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:07.128957  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:07.129021  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:07.154976  624674 cri.go:89] found id: ""
	I1202 22:36:07.155026  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.155035  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:07.155042  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:07.155100  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:07.180991  624674 cri.go:89] found id: ""
	I1202 22:36:07.181019  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.181027  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:07.181034  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:07.181089  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:07.205928  624674 cri.go:89] found id: ""
	I1202 22:36:07.205952  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.205961  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:07.205968  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:07.206028  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:07.233909  624674 cri.go:89] found id: ""
	I1202 22:36:07.233976  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.234000  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:07.234018  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:07.234108  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:07.262206  624674 cri.go:89] found id: ""
	I1202 22:36:07.262273  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.262295  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:07.262313  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:07.262399  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:07.288094  624674 cri.go:89] found id: ""
	I1202 22:36:07.288120  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.288134  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:07.288141  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:07.288198  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:07.313278  624674 cri.go:89] found id: ""
	I1202 22:36:07.313300  624674 logs.go:282] 0 containers: []
	W1202 22:36:07.313309  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:07.313318  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:07.313333  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:07.383521  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:07.383560  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:07.399483  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:07.399511  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:07.461531  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:07.461594  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:07.461614  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:07.501972  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:07.502009  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:10.032759  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:10.044915  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:10.044992  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:10.075690  624674 cri.go:89] found id: ""
	I1202 22:36:10.075714  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.075722  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:10.075729  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:10.075788  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:10.105995  624674 cri.go:89] found id: ""
	I1202 22:36:10.106017  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.106026  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:10.106032  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:10.106092  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:10.131894  624674 cri.go:89] found id: ""
	I1202 22:36:10.131921  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.131930  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:10.131937  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:10.131994  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:10.156679  624674 cri.go:89] found id: ""
	I1202 22:36:10.156702  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.156710  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:10.156717  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:10.156774  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:10.186371  624674 cri.go:89] found id: ""
	I1202 22:36:10.186394  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.186402  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:10.186409  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:10.186465  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:10.211269  624674 cri.go:89] found id: ""
	I1202 22:36:10.211291  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.211299  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:10.211306  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:10.211361  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:10.235928  624674 cri.go:89] found id: ""
	I1202 22:36:10.235952  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.235960  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:10.235967  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:10.236026  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:10.260136  624674 cri.go:89] found id: ""
	I1202 22:36:10.260158  624674 logs.go:282] 0 containers: []
	W1202 22:36:10.260166  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:10.260175  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:10.260189  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:10.300375  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:10.300407  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:10.330341  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:10.330414  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:10.398440  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:10.398477  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:10.415856  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:10.415885  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:10.486455  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
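
The log bundle gathered on each failed cycle can be reproduced as one script; every command below is copied from the ssh_runner entries above (the binary path and kubectl version are specific to this run):

#!/usr/bin/env bash
# Same log bundle minikube collects per cycle: kubelet, dmesg,
# describe nodes, CRI-O, and container status.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
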
	I1202 22:36:12.987133  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:12.997462  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:12.997529  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:13.025574  624674 cri.go:89] found id: ""
	I1202 22:36:13.025600  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.025609  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:13.025617  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:13.025680  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:13.056119  624674 cri.go:89] found id: ""
	I1202 22:36:13.056145  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.056155  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:13.056163  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:13.056223  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:13.083042  624674 cri.go:89] found id: ""
	I1202 22:36:13.083068  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.083082  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:13.083089  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:13.083147  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:13.113027  624674 cri.go:89] found id: ""
	I1202 22:36:13.113052  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.113061  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:13.113068  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:13.113131  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:13.139457  624674 cri.go:89] found id: ""
	I1202 22:36:13.139481  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.139490  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:13.139496  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:13.139555  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:13.164634  624674 cri.go:89] found id: ""
	I1202 22:36:13.164663  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.164673  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:13.164680  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:13.164744  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:13.190105  624674 cri.go:89] found id: ""
	I1202 22:36:13.190182  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.190205  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:13.190224  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:13.190308  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:13.215050  624674 cri.go:89] found id: ""
	I1202 22:36:13.215073  624674 logs.go:282] 0 containers: []
	W1202 22:36:13.215081  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:13.215091  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:13.215103  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:13.243868  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:13.243894  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:13.310467  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:13.310501  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:13.326858  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:13.326947  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:13.393184  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:13.393204  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:13.393225  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:15.935124  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:15.945622  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:15.945697  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:15.981214  624674 cri.go:89] found id: ""
	I1202 22:36:15.981240  624674 logs.go:282] 0 containers: []
	W1202 22:36:15.981248  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:15.981255  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:15.981311  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:16.009275  624674 cri.go:89] found id: ""
	I1202 22:36:16.009305  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.009318  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:16.009324  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:16.009390  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:16.036433  624674 cri.go:89] found id: ""
	I1202 22:36:16.036456  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.036464  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:16.036471  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:16.036532  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:16.062834  624674 cri.go:89] found id: ""
	I1202 22:36:16.062857  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.062866  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:16.062872  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:16.062936  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:16.089677  624674 cri.go:89] found id: ""
	I1202 22:36:16.089701  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.089710  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:16.089717  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:16.089776  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:16.120631  624674 cri.go:89] found id: ""
	I1202 22:36:16.120660  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.120669  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:16.120676  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:16.120736  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:16.146547  624674 cri.go:89] found id: ""
	I1202 22:36:16.146572  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.146581  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:16.146588  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:16.146649  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:16.171442  624674 cri.go:89] found id: ""
	I1202 22:36:16.171516  624674 logs.go:282] 0 containers: []
	W1202 22:36:16.171531  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:16.171541  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:16.171552  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:16.213749  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:16.213785  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:16.241827  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:16.241854  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:16.310085  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:16.310120  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:16.326775  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:16.326805  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:16.394450  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:18.894927  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:18.905080  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:18.905191  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:18.938714  624674 cri.go:89] found id: ""
	I1202 22:36:18.938740  624674 logs.go:282] 0 containers: []
	W1202 22:36:18.938748  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:18.938755  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:18.938814  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:18.975325  624674 cri.go:89] found id: ""
	I1202 22:36:18.975346  624674 logs.go:282] 0 containers: []
	W1202 22:36:18.975354  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:18.975361  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:18.975417  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:19.001422  624674 cri.go:89] found id: ""
	I1202 22:36:19.001470  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.001480  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:19.001488  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:19.001588  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:19.031715  624674 cri.go:89] found id: ""
	I1202 22:36:19.031742  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.031751  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:19.031757  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:19.031842  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:19.057207  624674 cri.go:89] found id: ""
	I1202 22:36:19.057232  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.057241  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:19.057248  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:19.057307  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:19.087597  624674 cri.go:89] found id: ""
	I1202 22:36:19.087622  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.087631  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:19.087638  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:19.087697  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:19.114346  624674 cri.go:89] found id: ""
	I1202 22:36:19.114377  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.114385  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:19.114392  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:19.114450  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:19.144913  624674 cri.go:89] found id: ""
	I1202 22:36:19.144940  624674 logs.go:282] 0 containers: []
	W1202 22:36:19.144948  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:19.144958  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:19.144969  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:19.216091  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:19.216127  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:19.232364  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:19.232392  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:19.302693  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:19.302713  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:19.302725  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:19.346255  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:19.346299  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
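When every probe comes back empty, the run falls back to the "Gathering logs for ..." pass seen above. A sketch of that fan-out, under the assumption (not confirmed by the minikube source) that each collector is an independent shell command whose failure is warned about rather than aborting the pass, as with "failed describe nodes":

package main

import (
	"fmt"
	"os/exec"
)

// Each collector is a named shell command taken from the log; a failing
// collector is reported as a warning and the pass continues.
func main() {
	collectors := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	gathered := map[string][]byte{}
	for _, c := range collectors {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("W failed %s: %v\n", c.name, err) // warn and continue
		}
		gathered[c.name] = out
	}
	fmt.Printf("gathered %d log sources\n", len(gathered))
}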
	I1202 22:36:21.877133  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:21.891688  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:21.891758  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:21.946160  624674 cri.go:89] found id: ""
	I1202 22:36:21.946196  624674 logs.go:282] 0 containers: []
	W1202 22:36:21.946204  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:21.946211  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:21.946271  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:22.017852  624674 cri.go:89] found id: ""
	I1202 22:36:22.017879  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.017888  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:22.017894  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:22.017957  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:22.077470  624674 cri.go:89] found id: ""
	I1202 22:36:22.077498  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.077507  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:22.077514  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:22.077570  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:22.119611  624674 cri.go:89] found id: ""
	I1202 22:36:22.119639  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.119647  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:22.119654  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:22.119713  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:22.149631  624674 cri.go:89] found id: ""
	I1202 22:36:22.149658  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.149667  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:22.149673  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:22.149728  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:22.175973  624674 cri.go:89] found id: ""
	I1202 22:36:22.175996  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.176005  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:22.176012  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:22.176086  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:22.200855  624674 cri.go:89] found id: ""
	I1202 22:36:22.200881  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.200890  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:22.200897  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:22.200957  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:22.225532  624674 cri.go:89] found id: ""
	I1202 22:36:22.225556  624674 logs.go:282] 0 containers: []
	W1202 22:36:22.225564  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:22.225573  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:22.225590  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:22.257399  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:22.257427  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:22.323719  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:22.323755  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:22.340013  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:22.340044  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:22.402963  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:22.402987  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:22.403012  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:24.943180  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:24.956701  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:24.956783  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:24.989836  624674 cri.go:89] found id: ""
	I1202 22:36:24.989861  624674 logs.go:282] 0 containers: []
	W1202 22:36:24.989887  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:24.989893  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:24.989956  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:25.029676  624674 cri.go:89] found id: ""
	I1202 22:36:25.029706  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.029734  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:25.029742  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:25.029816  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:25.062663  624674 cri.go:89] found id: ""
	I1202 22:36:25.062688  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.062697  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:25.062722  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:25.062791  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:25.112563  624674 cri.go:89] found id: ""
	I1202 22:36:25.112587  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.112595  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:25.112601  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:25.112664  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:25.144784  624674 cri.go:89] found id: ""
	I1202 22:36:25.144810  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.144819  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:25.144826  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:25.144884  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:25.207284  624674 cri.go:89] found id: ""
	I1202 22:36:25.207305  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.207314  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:25.207320  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:25.207386  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:25.266360  624674 cri.go:89] found id: ""
	I1202 22:36:25.266382  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.266390  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:25.266396  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:25.266456  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:25.306465  624674 cri.go:89] found id: ""
	I1202 22:36:25.306486  624674 logs.go:282] 0 containers: []
	W1202 22:36:25.306493  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:25.306502  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:25.306513  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:25.341109  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:25.341146  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:25.417102  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:25.417189  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:25.439833  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:25.439860  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:25.523374  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:25.523390  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:25.523401  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:28.075987  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:28.090076  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:28.090150  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:28.120937  624674 cri.go:89] found id: ""
	I1202 22:36:28.120964  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.120982  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:28.120989  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:28.121052  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:28.158260  624674 cri.go:89] found id: ""
	I1202 22:36:28.158287  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.158296  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:28.158303  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:28.158362  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:28.245261  624674 cri.go:89] found id: ""
	I1202 22:36:28.245288  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.245296  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:28.245303  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:28.245362  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:28.276637  624674 cri.go:89] found id: ""
	I1202 22:36:28.276664  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.276673  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:28.276680  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:28.276735  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:28.302879  624674 cri.go:89] found id: ""
	I1202 22:36:28.302905  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.302913  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:28.302919  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:28.302975  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:28.333769  624674 cri.go:89] found id: ""
	I1202 22:36:28.333794  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.333803  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:28.333810  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:28.333864  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:28.369065  624674 cri.go:89] found id: ""
	I1202 22:36:28.369092  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.369101  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:28.369108  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:28.369207  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:28.413525  624674 cri.go:89] found id: ""
	I1202 22:36:28.413551  624674 logs.go:282] 0 containers: []
	W1202 22:36:28.413560  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:28.413569  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:28.413581  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:28.492266  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:28.492351  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:28.512117  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:28.512147  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:28.610461  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:28.610486  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:28.610498  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:28.655781  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:28.655814  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:31.200323  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:31.233946  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:31.234022  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:31.271139  624674 cri.go:89] found id: ""
	I1202 22:36:31.271168  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.271176  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:31.271183  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:31.271244  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:31.309775  624674 cri.go:89] found id: ""
	I1202 22:36:31.309801  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.309810  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:31.309816  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:31.309878  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:31.337422  624674 cri.go:89] found id: ""
	I1202 22:36:31.337449  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.337458  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:31.337464  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:31.337549  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:31.362626  624674 cri.go:89] found id: ""
	I1202 22:36:31.362654  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.362663  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:31.362670  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:31.362733  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:31.389565  624674 cri.go:89] found id: ""
	I1202 22:36:31.389591  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.389599  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:31.389605  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:31.389660  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:31.417274  624674 cri.go:89] found id: ""
	I1202 22:36:31.417299  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.417308  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:31.417314  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:31.417385  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:31.445316  624674 cri.go:89] found id: ""
	I1202 22:36:31.445343  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.445352  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:31.445357  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:31.445415  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:31.475547  624674 cri.go:89] found id: ""
	I1202 22:36:31.475568  624674 logs.go:282] 0 containers: []
	W1202 22:36:31.475576  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:31.475585  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:31.475596  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:31.517532  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:31.517605  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:31.598326  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:31.598408  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:31.615470  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:31.615496  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:31.696543  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:31.696606  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:31.696641  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:34.243122  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:34.253328  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:34.253398  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:34.278887  624674 cri.go:89] found id: ""
	I1202 22:36:34.278911  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.278920  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:34.278926  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:34.278983  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:34.304964  624674 cri.go:89] found id: ""
	I1202 22:36:34.304988  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.304996  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:34.305002  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:34.305061  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:34.329934  624674 cri.go:89] found id: ""
	I1202 22:36:34.330031  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.330047  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:34.330054  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:34.330148  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:34.355474  624674 cri.go:89] found id: ""
	I1202 22:36:34.355546  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.355570  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:34.355589  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:34.355677  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:34.387775  624674 cri.go:89] found id: ""
	I1202 22:36:34.387800  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.387809  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:34.387815  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:34.387875  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:34.412521  624674 cri.go:89] found id: ""
	I1202 22:36:34.412586  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.412600  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:34.412608  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:34.412668  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:34.436412  624674 cri.go:89] found id: ""
	I1202 22:36:34.436434  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.436443  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:34.436450  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:34.436507  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:34.460778  624674 cri.go:89] found id: ""
	I1202 22:36:34.460802  624674 logs.go:282] 0 containers: []
	W1202 22:36:34.460811  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:34.460820  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:34.460831  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:34.487905  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:34.487931  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:34.555671  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:34.555709  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:34.571846  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:34.571875  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:34.651418  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:34.651438  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:34.651451  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:37.192093  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:37.206303  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:37.206387  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:37.242798  624674 cri.go:89] found id: ""
	I1202 22:36:37.242826  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.242834  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:37.242841  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:37.242899  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:37.267610  624674 cri.go:89] found id: ""
	I1202 22:36:37.267633  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.267641  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:37.267648  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:37.267704  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:37.293068  624674 cri.go:89] found id: ""
	I1202 22:36:37.293132  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.293174  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:37.293182  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:37.293239  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:37.321793  624674 cri.go:89] found id: ""
	I1202 22:36:37.321826  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.321836  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:37.321842  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:37.321915  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:37.347181  624674 cri.go:89] found id: ""
	I1202 22:36:37.347205  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.347214  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:37.347226  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:37.347283  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:37.372762  624674 cri.go:89] found id: ""
	I1202 22:36:37.372788  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.372797  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:37.372803  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:37.372881  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:37.397583  624674 cri.go:89] found id: ""
	I1202 22:36:37.397651  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.397671  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:37.397690  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:37.397773  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:37.427056  624674 cri.go:89] found id: ""
	I1202 22:36:37.427082  624674 logs.go:282] 0 containers: []
	W1202 22:36:37.427090  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:37.427107  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:37.427119  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:37.495210  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:37.495280  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:37.495306  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:37.540803  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:37.540855  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:37.572505  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:37.572534  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:37.639520  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:37.639556  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
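The timestamps show the whole sequence repeating roughly every three seconds, each round opening with the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe. A hedged Go sketch of such a poll-until-deadline loop (the probe command and cadence come from the log; the loop structure itself is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether the pgrep pattern from the log matches
// a live process; pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}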
	I1202 22:36:40.157052  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:40.171553  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:40.171625  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:40.203095  624674 cri.go:89] found id: ""
	I1202 22:36:40.203121  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.203131  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:40.203137  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:40.203195  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:40.234197  624674 cri.go:89] found id: ""
	I1202 22:36:40.234224  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.234233  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:40.234240  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:40.234296  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:40.262930  624674 cri.go:89] found id: ""
	I1202 22:36:40.262959  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.262969  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:40.262975  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:40.263058  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:40.288197  624674 cri.go:89] found id: ""
	I1202 22:36:40.288224  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.288233  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:40.288241  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:40.288301  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:40.314279  624674 cri.go:89] found id: ""
	I1202 22:36:40.314310  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.314325  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:40.314332  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:40.314391  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:40.342724  624674 cri.go:89] found id: ""
	I1202 22:36:40.342747  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.342756  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:40.342763  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:40.342818  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:40.368073  624674 cri.go:89] found id: ""
	I1202 22:36:40.368094  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.368102  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:40.368109  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:40.368167  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:40.392468  624674 cri.go:89] found id: ""
	I1202 22:36:40.392535  624674 logs.go:282] 0 containers: []
	W1202 22:36:40.392560  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:40.392576  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:40.392588  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:40.432774  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:40.432809  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:40.460113  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:40.460140  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:40.528080  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:40.528116  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:40.543812  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:40.543840  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:40.612381  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
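Every "describe nodes" attempt fails the same way because nothing is listening on the apiserver port yet; a bare TCP dial reproduces the symptom (a sketch, assuming only the host and port printed in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver process, this dial fails with the same
	// "connection refused" that kubectl reports above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}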
	I1202 22:36:43.112603  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:43.122962  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:36:43.123042  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:36:43.153249  624674 cri.go:89] found id: ""
	I1202 22:36:43.153273  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.153281  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:36:43.153287  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:36:43.153349  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:36:43.192580  624674 cri.go:89] found id: ""
	I1202 22:36:43.192601  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.192609  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:36:43.192615  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:36:43.192672  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:36:43.225098  624674 cri.go:89] found id: ""
	I1202 22:36:43.225119  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.225128  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:36:43.225135  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:36:43.225197  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:36:43.253256  624674 cri.go:89] found id: ""
	I1202 22:36:43.253342  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.253365  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:36:43.253383  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:36:43.253486  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:36:43.294399  624674 cri.go:89] found id: ""
	I1202 22:36:43.294421  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.294429  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:36:43.294436  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:36:43.294492  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:36:43.350717  624674 cri.go:89] found id: ""
	I1202 22:36:43.350740  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.350749  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:36:43.350756  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:36:43.350821  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:36:43.386007  624674 cri.go:89] found id: ""
	I1202 22:36:43.386037  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.386051  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:36:43.386064  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:36:43.386123  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:36:43.426239  624674 cri.go:89] found id: ""
	I1202 22:36:43.426347  624674 logs.go:282] 0 containers: []
	W1202 22:36:43.426376  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:36:43.426417  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:36:43.426459  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:36:43.530151  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:36:43.530264  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:36:43.552835  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:36:43.553003  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:36:43.637996  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:36:43.638063  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:36:43.638085  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:36:43.680914  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:36:43.680957  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:36:46.213112  624674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:36:46.223075  624674 kubeadm.go:602] duration metric: took 4m3.106402046s to restartPrimaryControlPlane
	W1202 22:36:46.223151  624674 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 22:36:46.223225  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 22:36:46.635271  624674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:36:46.648215  624674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:36:46.656319  624674 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:36:46.656425  624674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:36:46.664298  624674 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:36:46.664320  624674 kubeadm.go:158] found existing configuration files:
	
	I1202 22:36:46.664398  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:36:46.672325  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:36:46.672399  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:36:46.679746  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:36:46.687372  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:36:46.687468  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:36:46.694739  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:36:46.702394  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:36:46.702456  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:36:46.709770  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:36:46.717351  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:36:46.717462  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
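The cleanup above is mechanical: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that lacks it (here all four are simply absent, so each grep exits 2 and the rm is a no-op). A Go sketch mirroring the commands in the log, not the kubeadm.go source:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			// Missing file or wrong endpoint: treat as stale and remove,
			// ignoring the error exactly as `rm -f` would.
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}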
	I1202 22:36:46.724863  624674 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:36:46.763080  624674 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:36:46.763342  624674 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:36:46.836882  624674 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:36:46.836982  624674 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:36:46.837040  624674 kubeadm.go:319] OS: Linux
	I1202 22:36:46.837087  624674 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:36:46.837169  624674 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:36:46.837230  624674 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:36:46.837283  624674 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:36:46.837335  624674 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:36:46.837385  624674 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:36:46.837434  624674 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:36:46.837485  624674 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:36:46.837533  624674 kubeadm.go:319] CGROUPS_BLKIO: enabled
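The CGROUPS_* table is printed by kubeadm's system verification pass. On a cgroup-v1 host the same controller information is exposed by /proc/cgroups, so a sketch like the following (an assumption about the mechanism, not kubeadm's code) reproduces the table:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		fmt.Println("cannot read /proc/cgroups:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip the header row
		}
		// Columns: subsys_name hierarchy num_cgroups enabled
		fields := strings.Fields(line)
		if len(fields) == 4 {
			state := "disabled"
			if fields[3] == "1" {
				state = "enabled"
			}
			fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
		}
	}
}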
	I1202 22:36:46.903251  624674 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:36:46.903387  624674 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:36:46.903669  624674 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:36:46.927476  624674 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:36:46.934573  624674 out.go:252]   - Generating certificates and keys ...
	I1202 22:36:46.934736  624674 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:36:46.934835  624674 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:36:46.934921  624674 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:36:46.934988  624674 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:36:46.935103  624674 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:36:46.935162  624674 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:36:46.935230  624674 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:36:46.935295  624674 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:36:46.935375  624674 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:36:46.935452  624674 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:36:46.935499  624674 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:36:46.935564  624674 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:36:46.991426  624674 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:36:47.928005  624674 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:36:48.055532  624674 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:36:48.262552  624674 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:36:48.377532  624674 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:36:48.378344  624674 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:36:48.381095  624674 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:36:48.384453  624674 out.go:252]   - Booting up control plane ...
	I1202 22:36:48.384573  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:36:48.384700  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:36:48.386688  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:36:48.402287  624674 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:36:48.402396  624674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:36:48.410875  624674 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:36:48.410975  624674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:36:48.411072  624674 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:36:48.545321  624674 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:36:48.545442  624674 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:40:48.545484  624674 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000304958s
	I1202 22:40:48.545529  624674 kubeadm.go:319] 
	I1202 22:40:48.545587  624674 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:40:48.545621  624674 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:40:48.545726  624674 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:40:48.545732  624674 kubeadm.go:319] 
	I1202 22:40:48.545837  624674 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:40:48.545869  624674 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:40:48.545900  624674 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:40:48.545904  624674 kubeadm.go:319] 
	I1202 22:40:48.549928  624674 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:40:48.550355  624674 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:40:48.550463  624674 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:40:48.550699  624674 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:40:48.550704  624674 kubeadm.go:319] 
	I1202 22:40:48.550773  624674 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
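The failure mode is fully described by the log itself: kubeadm polls http://127.0.0.1:10248/healthz (the call it prints as curl -sSL) until a 4m0s deadline expires. A Go sketch of that wait, using only the URL and timeout stated in the log:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// "This can take up to 4m0s" -- the deadline from the kubelet-check.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		if ctx.Err() != nil {
			// Matches the log: Get ".../healthz": context deadline exceeded
			fmt.Println("kubelet never became healthy:", ctx.Err())
			return
		}
		time.Sleep(time.Second)
	}
}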
	W1202 22:40:48.550873  624674 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000304958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000304958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 22:40:48.550953  624674 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 22:40:48.979170  624674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:40:48.996360  624674 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:40:48.996422  624674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:40:49.008975  624674 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:40:49.008993  624674 kubeadm.go:158] found existing configuration files:
	
	I1202 22:40:49.009050  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:40:49.019125  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:40:49.019248  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:40:49.027808  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:40:49.038713  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:40:49.038774  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:40:49.047423  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:40:49.057731  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:40:49.057794  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:40:49.067945  624674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:40:49.078077  624674 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:40:49.078192  624674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:40:49.086418  624674 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:40:49.136927  624674 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:40:49.137415  624674 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:40:49.243755  624674 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:40:49.243916  624674 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:40:49.243990  624674 kubeadm.go:319] OS: Linux
	I1202 22:40:49.244073  624674 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:40:49.244157  624674 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:40:49.244232  624674 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:40:49.244319  624674 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:40:49.244401  624674 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:40:49.244483  624674 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:40:49.244561  624674 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:40:49.244643  624674 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:40:49.244721  624674 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:40:49.323168  624674 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:40:49.323349  624674 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:40:49.323470  624674 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:40:49.355833  624674 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:40:49.361594  624674 out.go:252]   - Generating certificates and keys ...
	I1202 22:40:49.361766  624674 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:40:49.361884  624674 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:40:49.362013  624674 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:40:49.362083  624674 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:40:49.362158  624674 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:40:49.362216  624674 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:40:49.362283  624674 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:40:49.362349  624674 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:40:49.362495  624674 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:40:49.363113  624674 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:40:49.363673  624674 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:40:49.363955  624674 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:40:49.601352  624674 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:40:49.735261  624674 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:40:50.077133  624674 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:40:50.352408  624674 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:40:50.465001  624674 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:40:50.466246  624674 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:40:50.470133  624674 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:40:50.473501  624674 out.go:252]   - Booting up control plane ...
	I1202 22:40:50.473642  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:40:50.473743  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:40:50.473829  624674 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:40:50.490421  624674 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:40:50.490544  624674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:40:50.497882  624674 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:40:50.498172  624674 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:40:50.498383  624674 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:40:50.629555  624674 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:40:50.629681  624674 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:44:50.628323  624674 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000039886s
	I1202 22:44:50.628354  624674 kubeadm.go:319] 
	I1202 22:44:50.628412  624674 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:44:50.628446  624674 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:44:50.628550  624674 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:44:50.628556  624674 kubeadm.go:319] 
	I1202 22:44:50.628660  624674 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:44:50.628692  624674 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:44:50.628723  624674 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:44:50.628727  624674 kubeadm.go:319] 
	I1202 22:44:50.632463  624674 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:44:50.632946  624674 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:44:50.633069  624674 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:44:50.633358  624674 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:44:50.633368  624674 kubeadm.go:319] 
	I1202 22:44:50.633443  624674 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:44:50.633550  624674 kubeadm.go:403] duration metric: took 12m7.563871761s to StartCluster
	I1202 22:44:50.633589  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:44:50.633655  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:44:50.685998  624674 cri.go:89] found id: ""
	I1202 22:44:50.686021  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.686029  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:44:50.686036  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:44:50.686098  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:44:50.747622  624674 cri.go:89] found id: ""
	I1202 22:44:50.747647  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.747656  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:44:50.747663  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:44:50.747723  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:44:50.799247  624674 cri.go:89] found id: ""
	I1202 22:44:50.799270  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.799282  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:44:50.799290  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:44:50.799351  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:44:50.840068  624674 cri.go:89] found id: ""
	I1202 22:44:50.840090  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.840098  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:44:50.840105  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:44:50.840163  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:44:50.866362  624674 cri.go:89] found id: ""
	I1202 22:44:50.866383  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.866391  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:44:50.866397  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:44:50.866459  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:44:50.905098  624674 cri.go:89] found id: ""
	I1202 22:44:50.905122  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.905136  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:44:50.905142  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:44:50.905198  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:44:50.941742  624674 cri.go:89] found id: ""
	I1202 22:44:50.941764  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.941772  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:44:50.941779  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:44:50.941835  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:44:50.980178  624674 cri.go:89] found id: ""
	I1202 22:44:50.980199  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.980207  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:44:50.980216  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:44:50.980230  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:44:51.009676  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:44:51.009709  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:44:51.105772  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:44:51.105790  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:44:51.105805  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:44:51.159126  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:44:51.159161  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:44:51.201613  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:44:51.201638  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 22:44:51.286915  624674 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:44:51.286973  624674 out.go:285] * 
	* 
	W1202 22:44:51.287095  624674 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:44:51.287109  624674 out.go:285] * 
	* 
	W1202 22:44:51.289731  624674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:44:51.298102  624674 out.go:203] 
	W1202 22:44:51.301953  624674 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:44:51.302062  624674 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:44:51.302123  624674 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:44:51.305722  624674 out.go:203] 

** /stderr **
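Each retry in the log above fails identically: kubeadm's [kubelet-check] polls http://127.0.0.1:10248/healthz for up to 4m0s and the kubelet never becomes healthy on this cgroup v1 host (kernel 5.15.0-1084-aws). The log's own hints are the natural triage path; a minimal sketch, assuming the kubernetes-upgrade-636006 profile from this run still exists and curl is available in the node image:

	# inspect kubelet state inside the minikube node (commands quoted from the log above)
	minikube -p kubernetes-upgrade-636006 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-636006 ssh -- sudo journalctl -xeu kubelet -n 100
	# probe the same health endpoint kubeadm was polling
	minikube -p kubernetes-upgrade-636006 ssh -- curl -sS http://127.0.0.1:10248/healthz

Per the repeated [WARNING SystemVerification], kubelet v1.35 or newer deprecates cgroups v1 and, by the warning's own wording, requires the KubeletConfiguration option 'FailCgroupV1' set to 'false' (plus skipping the validation) before it will run on such hosts; the --extra-config=kubelet.cgroup-driver=systemd suggestion in the log targets the other common cause, a cgroup-driver mismatch between the kubelet and CRI-O.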
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-636006 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-636006 version --output=json: exit status 1 (116.632281ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
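The JSON above carries only clientVersion and kustomizeVersion: serverVersion is absent because the apiserver at 192.168.76.2:8443 refused the connection, consistent with the control plane never having come up. One way to confirm the client half in isolation (a hypothetical follow-up command, not part of this run):

	# exits 0 and prints client info even when the apiserver is unreachable
	kubectl --context kubernetes-upgrade-636006 version --client=true --output=json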
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-02 22:44:52.0876544 +0000 UTC m=+5803.758716943
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-636006
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-636006:

-- stdout --
	[
	    {
	        "Id": "786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c",
	        "Created": "2025-12-02T22:31:40.75345861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 625209,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:32:17.52093566Z",
	            "FinishedAt": "2025-12-02T22:32:15.291423225Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c/hosts",
	        "LogPath": "/var/lib/docker/containers/786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c/786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c-json.log",
	        "Name": "/kubernetes-upgrade-636006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-636006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-636006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786dac9ac5c9f6c7180bdbc1e436b5a0628113be0fcab888aa6a3ab65662f42c",
	                "LowerDir": "/var/lib/docker/overlay2/fb8aeac6a8276a84eb7821f5d17d4ef2c381ef8078bdc81c231c7cf7566d1f1a-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fb8aeac6a8276a84eb7821f5d17d4ef2c381ef8078bdc81c231c7cf7566d1f1a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fb8aeac6a8276a84eb7821f5d17d4ef2c381ef8078bdc81c231c7cf7566d1f1a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fb8aeac6a8276a84eb7821f5d17d4ef2c381ef8078bdc81c231c7cf7566d1f1a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-636006",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-636006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-636006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-636006",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-636006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f02ccf7ce4cc9dbaa5cd48267c87164dddb76be4cecef3a7af9af3c4c8cddae",
	            "SandboxKey": "/var/run/docker/netns/6f02ccf7ce4c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-636006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:56:6e:2c:10:7f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "98929eb7ab1f806b22643ab5a5b527cf0db78168de2a09c64cbfb29b349c1a9e",
	                    "EndpointID": "3266d084b3aeb83f03788cbed141cad7933ee73c28b721cbd686a2a6919a14a2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-636006",
	                        "786dac9ac5c9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
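Editor's note: the inspect output confirms the container is up and 8443/tcp is published on the host loopback. The same Go-template form the provisioning log below uses for 22/tcp also pulls the apiserver mapping directly, e.g.:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-636006
	# per the JSON above this prints 33383, i.e. the apiserver is published (if anything answers) at 127.0.0.1:33383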
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-636006 -n kubernetes-upgrade-636006
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-636006 -n kubernetes-upgrade-636006: exit status 2 (423.024304ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
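Editor's note: exit status 2 with Host reporting "Running" means the container is up but at least one other component is not. Widening the status template shows which one; minikube's status object also exposes Kubelet, APIServer and Kubeconfig fields (a sketch, not captured from this run):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-636006 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'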
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-636006 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-245878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-825984    │ jenkins │ v1.35.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ stop    │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ stop    │ -p kubernetes-upgrade-636006                                                                                                                    │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │                     │
	│ delete  │ -p missing-upgrade-825984                                                                                                                       │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:33 UTC │
	│ stop    │ stopped-upgrade-013069 stop                                                                                                                     │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:33 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:37 UTC │
	│ delete  │ -p stopped-upgrade-013069                                                                                                                       │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:37 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-873899    │ jenkins │ v1.35.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:38 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:42 UTC │
	│ delete  │ -p running-upgrade-873899                                                                                                                       │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ start   │ -p pause-618835 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:44 UTC │
	│ start   │ -p pause-618835 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │ 02 Dec 25 22:44 UTC │
	│ pause   │ -p pause-618835 --alsologtostderr -v=5                                                                                                          │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:44:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:44:21.772077  661046 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:44:21.772265  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772298  661046 out.go:374] Setting ErrFile to fd 2...
	I1202 22:44:21.772315  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772599  661046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:44:21.772984  661046 out.go:368] Setting JSON to false
	I1202 22:44:21.774176  661046 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15990,"bootTime":1764699472,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 22:44:21.774280  661046 start.go:143] virtualization:  
	I1202 22:44:21.777389  661046 out.go:179] * [pause-618835] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:44:21.781217  661046 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:44:21.781370  661046 notify.go:221] Checking for updates...
	I1202 22:44:21.787658  661046 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:44:21.790449  661046 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:21.793367  661046 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 22:44:21.796242  661046 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:44:21.799091  661046 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:44:21.802432  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:21.803069  661046 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:44:21.853224  661046 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:44:21.853415  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.912680  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.903202911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.912785  661046 docker.go:319] overlay module found
	I1202 22:44:21.915810  661046 out.go:179] * Using the docker driver based on existing profile
	I1202 22:44:21.918579  661046 start.go:309] selected driver: docker
	I1202 22:44:21.918598  661046 start.go:927] validating driver "docker" against &{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.918734  661046 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:44:21.918838  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.986340  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.976842771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.986742  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:21.986812  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:21.986865  661046 start.go:353] cluster config:
	{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.991739  661046 out.go:179] * Starting "pause-618835" primary control-plane node in "pause-618835" cluster
	I1202 22:44:21.994531  661046 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 22:44:21.997589  661046 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:44:22.000564  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:22.000717  661046 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 22:44:22.000743  661046 cache.go:65] Caching tarball of preloaded images
	I1202 22:44:22.000656  661046 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:44:22.001213  661046 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 22:44:22.001266  661046 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 22:44:22.001536  661046 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/config.json ...
	I1202 22:44:22.024507  661046 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:44:22.024534  661046 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:44:22.024549  661046 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:44:22.024584  661046 start.go:360] acquireMachinesLock for pause-618835: {Name:mke18653c2307ed5537ca2391ee1b331ce530ab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:44:22.024646  661046 start.go:364] duration metric: took 38.532µs to acquireMachinesLock for "pause-618835"
	I1202 22:44:22.024671  661046 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:44:22.024676  661046 fix.go:54] fixHost starting: 
	I1202 22:44:22.024950  661046 cli_runner.go:164] Run: docker container inspect pause-618835 --format={{.State.Status}}
	I1202 22:44:22.043037  661046 fix.go:112] recreateIfNeeded on pause-618835: state=Running err=<nil>
	W1202 22:44:22.043071  661046 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:44:22.046253  661046 out.go:252] * Updating the running docker "pause-618835" container ...
	I1202 22:44:22.046306  661046 machine.go:94] provisionDockerMachine start ...
	I1202 22:44:22.046465  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.064267  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.064602  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.064627  661046 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:44:22.214410  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
	I1202 22:44:22.214484  661046 ubuntu.go:182] provisioning hostname "pause-618835"
	I1202 22:44:22.214603  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.236613  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.236939  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.236955  661046 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-618835 && echo "pause-618835" | sudo tee /etc/hostname
	I1202 22:44:22.400346  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
	I1202 22:44:22.400434  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.429430  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.429764  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.429797  661046 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-618835' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-618835/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-618835' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:44:22.579235  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:44:22.579261  661046 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 22:44:22.579284  661046 ubuntu.go:190] setting up certificates
	I1202 22:44:22.579293  661046 provision.go:84] configureAuth start
	I1202 22:44:22.579352  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:22.596685  661046 provision.go:143] copyHostCerts
	I1202 22:44:22.596760  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 22:44:22.596778  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 22:44:22.596853  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 22:44:22.596973  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 22:44:22.596983  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 22:44:22.597014  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 22:44:22.597119  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 22:44:22.597130  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 22:44:22.597155  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 22:44:22.597213  661046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.pause-618835 san=[127.0.0.1 192.168.85.2 localhost minikube pause-618835]
	I1202 22:44:22.983637  661046 provision.go:177] copyRemoteCerts
	I1202 22:44:22.983707  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:44:22.983761  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.001895  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:23.106664  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 22:44:23.123704  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 22:44:23.141408  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 22:44:23.158215  661046 provision.go:87] duration metric: took 578.901326ms to configureAuth
	I1202 22:44:23.158243  661046 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:44:23.158477  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:23.158589  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.176100  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:23.176429  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:23.176448  661046 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 22:44:28.569900  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 22:44:28.569928  661046 machine.go:97] duration metric: took 6.523605112s to provisionDockerMachine
	I1202 22:44:28.569941  661046 start.go:293] postStartSetup for "pause-618835" (driver="docker")
	I1202 22:44:28.569952  661046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:44:28.570028  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:44:28.570073  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.587950  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.695041  661046 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:44:28.698567  661046 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:44:28.698595  661046 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:44:28.698607  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 22:44:28.698664  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 22:44:28.698757  661046 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 22:44:28.698862  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:44:28.706619  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:28.724907  661046 start.go:296] duration metric: took 154.950883ms for postStartSetup
	I1202 22:44:28.725007  661046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:44:28.725050  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.742944  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.844317  661046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:44:28.849218  661046 fix.go:56] duration metric: took 6.824535089s for fixHost
	I1202 22:44:28.849245  661046 start.go:83] releasing machines lock for "pause-618835", held for 6.824586601s
	I1202 22:44:28.849316  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:28.865850  661046 ssh_runner.go:195] Run: cat /version.json
	I1202 22:44:28.865915  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.866162  661046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:44:28.866214  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.884723  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.892719  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.990565  661046 ssh_runner.go:195] Run: systemctl --version
	I1202 22:44:29.095942  661046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 22:44:29.136291  661046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:44:29.140549  661046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:44:29.140626  661046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:44:29.149352  661046 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:44:29.149375  661046 start.go:496] detecting cgroup driver to use...
	I1202 22:44:29.149405  661046 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:44:29.149458  661046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 22:44:29.164433  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 22:44:29.177340  661046 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:44:29.177454  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:44:29.193150  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:44:29.205859  661046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:44:29.344604  661046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:44:29.473058  661046 docker.go:234] disabling docker service ...
	I1202 22:44:29.473200  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:44:29.488559  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:44:29.502024  661046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:44:29.637356  661046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:44:29.800358  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:44:29.814789  661046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:44:29.829527  661046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 22:44:29.829606  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.838693  661046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 22:44:29.838809  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.848300  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.857902  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.867485  661046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:44:29.876198  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.886566  661046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.897113  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.906342  661046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:44:29.914235  661046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:44:29.921742  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.054347  661046 ssh_runner.go:195] Run: sudo systemctl restart crio
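	# Editor's note (not part of the captured log): the sed series above pins the pause image to
	# registry.k8s.io/pause:3.10.1, forces cgroup_manager = "cgroupfs" with conmon_cgroup = "pod",
	# and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before this restart.
	# A hypothetical spot-check of the resulting drop-in on the live profile:
	#   docker exec pause-618835 cat /etc/crio/crio.conf.d/02-crio.conf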
	I1202 22:44:30.263291  661046 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 22:44:30.263379  661046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 22:44:30.267266  661046 start.go:564] Will wait 60s for crictl version
	I1202 22:44:30.267376  661046 ssh_runner.go:195] Run: which crictl
	I1202 22:44:30.270908  661046 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:44:30.295521  661046 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 22:44:30.295660  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.328562  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.364952  661046 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 22:44:30.367831  661046 cli_runner.go:164] Run: docker network inspect pause-618835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:44:30.383864  661046 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:44:30.387835  661046 kubeadm.go:884] updating cluster {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:44:30.387986  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:30.388044  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.427855  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.427881  661046 crio.go:433] Images already preloaded, skipping extraction
	I1202 22:44:30.427941  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.460051  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.460076  661046 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:44:30.460085  661046 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 22:44:30.460195  661046 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-618835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:44:30.460284  661046 ssh_runner.go:195] Run: crio config
	I1202 22:44:30.522967  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:30.522993  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:30.523027  661046 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:44:30.523050  661046 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-618835 NodeName:pause-618835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:44:30.523182  661046 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-618835"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:44:30.523263  661046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 22:44:30.530844  661046 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:44:30.530942  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:44:30.538341  661046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 22:44:30.551648  661046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:44:30.564596  661046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
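The 2209-byte file written above is the kubeadm config printed earlier, shipped to the node as kubeadm.yaml.new; later in this run it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A minimal local Go sketch of that copy-and-compare step, with only the paths taken from the log and everything else invented for illustration:

```go
// Hypothetical sketch of the "scp memory --> kubeadm.yaml.new" + diff step;
// not minikube's actual code, and error handling is trimmed.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n") // rendered config (truncated)
	// Write the in-memory config next to the current one on disk.
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", rendered, 0o644); err != nil {
		panic(err)
	}
	// `diff -u` exits non-zero when the two files differ.
	err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err != nil {
		fmt.Println("config drifted: restart the control plane with the new kubeadm.yaml")
	} else {
		fmt.Println("running cluster does not require reconfiguration")
	}
}
```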
	I1202 22:44:30.577069  661046 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:44:30.580727  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.703997  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:30.716780  661046 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835 for IP: 192.168.85.2
	I1202 22:44:30.716853  661046 certs.go:195] generating shared ca certs ...
	I1202 22:44:30.716880  661046 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:30.717060  661046 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 22:44:30.717130  661046 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 22:44:30.717172  661046 certs.go:257] generating profile certs ...
	I1202 22:44:30.717299  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key
	I1202 22:44:30.717406  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key.1773daca
	I1202 22:44:30.717507  661046 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key
	I1202 22:44:30.717663  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 22:44:30.717726  661046 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 22:44:30.717766  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:44:30.717819  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 22:44:30.717877  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:44:30.717924  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 22:44:30.718011  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:30.718867  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:44:30.740148  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:44:30.759607  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:44:30.777756  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 22:44:30.795266  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 22:44:30.812606  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:44:30.829845  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:44:30.847785  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 22:44:30.865490  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 22:44:30.882582  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 22:44:30.900054  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:44:30.917424  661046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:44:30.930080  661046 ssh_runner.go:195] Run: openssl version
	I1202 22:44:30.936710  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:44:30.944864  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949450  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949515  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:31.010598  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:44:31.029090  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 22:44:31.049679  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054810  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054882  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.147569  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 22:44:31.170321  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 22:44:31.257461  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.266924  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.267022  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.370520  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
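Each openssl/ln pair above implements OpenSSL's subject-hash lookup: every CA PEM under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs so that libraries can find it by subject hash. A hedged Go sketch of one such pair; the helper name is made up, and only the openssl invocation and symlink mirror the log:

```go
// Rough sketch of the hash-and-symlink step above. Assumes openssl is on PATH
// and the process can write to /etc/ssl/certs; not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: ln -fs <pem> /etc/ssl/certs/<hash>.0
	os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```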
	I1202 22:44:31.382909  661046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:44:31.391520  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:44:31.453313  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:44:31.516119  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:44:31.579296  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:44:31.643958  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:44:31.703980  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
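Each `-checkend 86400` run above asks whether a certificate remains valid for at least the next 24 hours. The same check in pure Go stdlib, as a sketch (the path comes from the log; the helper name is invented):

```go
// Sketch of `openssl x509 -checkend 86400`: parse the PEM and test whether
// NotAfter falls within the next duration d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate before restart")
	}
}
```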
	I1202 22:44:31.772148  661046 kubeadm.go:401] StartCluster: {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:31.772275  661046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 22:44:31.772341  661046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:44:31.832530  661046 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:31.832557  661046 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:31.832562  661046 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:31.832565  661046 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:31.832570  661046 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:31.832574  661046 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:31.832577  661046 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:31.832580  661046 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:31.832583  661046 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:31.832590  661046 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:31.832594  661046 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:31.832597  661046 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:31.832601  661046 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:31.832604  661046 cri.go:89] found id: ""
	I1202 22:44:31.832653  661046 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 22:44:31.850816  661046 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:31Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:31.850899  661046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:44:31.860604  661046 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:44:31.860624  661046 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:44:31.860677  661046 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:44:31.876850  661046 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:44:31.877498  661046 kubeconfig.go:125] found "pause-618835" server: "https://192.168.85.2:8443"
	I1202 22:44:31.878317  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:31.878820  661046 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 22:44:31.878845  661046 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 22:44:31.878851  661046 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 22:44:31.878856  661046 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 22:44:31.878860  661046 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 22:44:31.879145  661046 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:44:31.899458  661046 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:44:31.899491  661046 kubeadm.go:602] duration metric: took 38.860784ms to restartPrimaryControlPlane
	I1202 22:44:31.899500  661046 kubeadm.go:403] duration metric: took 127.364099ms to StartCluster
	I1202 22:44:31.899517  661046 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.899581  661046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:31.900441  661046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.900682  661046 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 22:44:31.901008  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:31.901066  661046 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:44:31.904906  661046 out.go:179] * Enabled addons: 
	I1202 22:44:31.904973  661046 out.go:179] * Verifying Kubernetes components...
	I1202 22:44:31.907865  661046 addons.go:530] duration metric: took 6.798498ms for enable addons: enabled=[]
	I1202 22:44:31.907952  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:32.213405  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:32.235526  661046 node_ready.go:35] waiting up to 6m0s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909622  661046 node_ready.go:49] node "pause-618835" is "Ready"
	I1202 22:44:35.909697  661046 node_ready.go:38] duration metric: took 3.674124436s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909726  661046 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:44:35.909814  661046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:44:35.929353  661046 api_server.go:72] duration metric: took 4.028633544s to wait for apiserver process to appear ...
	I1202 22:44:35.929419  661046 api_server.go:88] waiting for apiserver healthz status ...
	I1202 22:44:35.929461  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:35.952358  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 22:44:35.952443  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 22:44:36.430070  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.442739  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 22:44:36.442817  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 22:44:36.930397  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.938429  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 22:44:36.940086  661046 api_server.go:141] control plane version: v1.34.2
	I1202 22:44:36.940124  661046 api_server.go:131] duration metric: took 1.010684624s to wait for apiserver health ...
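The healthz sequence above (403 from system:anonymous, then 500 while post-start hooks settle, then 200 "ok") is a plain retry loop against https://192.168.85.2:8443/healthz. A stripped-down sketch; TLS verification is skipped here purely for brevity, where minikube would present the client certificates shown earlier:

```go
// Sketch of the apiserver healthz poll: GET /healthz until the server answers
// 200. Both 403 (anonymous user) and 500 (post-start hooks still failing)
// simply mean "not ready yet, retry".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```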
	I1202 22:44:36.940133  661046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 22:44:36.953714  661046 system_pods.go:59] 7 kube-system pods found
	I1202 22:44:36.953756  661046 system_pods.go:61] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.953767  661046 system_pods.go:61] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.953772  661046 system_pods.go:61] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.953780  661046 system_pods.go:61] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.953785  661046 system_pods.go:61] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.953789  661046 system_pods.go:61] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.953801  661046 system_pods.go:61] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.953813  661046 system_pods.go:74] duration metric: took 13.674774ms to wait for pod list to return data ...
	I1202 22:44:36.953824  661046 default_sa.go:34] waiting for default service account to be created ...
	I1202 22:44:36.979958  661046 default_sa.go:45] found service account: "default"
	I1202 22:44:36.979988  661046 default_sa.go:55] duration metric: took 26.153398ms for default service account to be created ...
	I1202 22:44:36.980007  661046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 22:44:36.983586  661046 system_pods.go:86] 7 kube-system pods found
	I1202 22:44:36.983626  661046 system_pods.go:89] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.983636  661046 system_pods.go:89] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.983643  661046 system_pods.go:89] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.983651  661046 system_pods.go:89] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.983656  661046 system_pods.go:89] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.983660  661046 system_pods.go:89] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.983668  661046 system_pods.go:89] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.983676  661046 system_pods.go:126] duration metric: took 3.662768ms to wait for k8s-apps to be running ...
	I1202 22:44:36.983695  661046 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 22:44:36.983753  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:36.997015  661046 system_svc.go:56] duration metric: took 13.311979ms WaitForService to wait for kubelet
	I1202 22:44:36.997060  661046 kubeadm.go:587] duration metric: took 5.096345819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:44:36.997082  661046 node_conditions.go:102] verifying NodePressure condition ...
	I1202 22:44:37.004213  661046 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 22:44:37.004260  661046 node_conditions.go:123] node cpu capacity is 2
	I1202 22:44:37.004276  661046 node_conditions.go:105] duration metric: took 7.186342ms to run NodePressure ...
	I1202 22:44:37.004292  661046 start.go:242] waiting for startup goroutines ...
	I1202 22:44:37.004307  661046 start.go:247] waiting for cluster config update ...
	I1202 22:44:37.004317  661046 start.go:256] writing updated cluster config ...
	I1202 22:44:37.004731  661046 ssh_runner.go:195] Run: rm -f paused
	I1202 22:44:37.010102  661046 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:44:37.010946  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:37.016129  661046 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.024271  661046 pod_ready.go:94] pod "coredns-66bc5c9577-q74fb" is "Ready"
	I1202 22:44:37.024312  661046 pod_ready.go:86] duration metric: took 8.141921ms for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.027905  661046 pod_ready.go:83] waiting for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:39.034075  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	W1202 22:44:41.533868  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:43.034100  661046 pod_ready.go:94] pod "etcd-pause-618835" is "Ready"
	I1202 22:44:43.034126  661046 pod_ready.go:86] duration metric: took 6.006192029s for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:43.036482  661046 pod_ready.go:83] waiting for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:45.049393  661046 pod_ready.go:104] pod "kube-apiserver-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:45.541910  661046 pod_ready.go:94] pod "kube-apiserver-pause-618835" is "Ready"
	I1202 22:44:45.541937  661046 pod_ready.go:86] duration metric: took 2.505424103s for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.544357  661046 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.548883  661046 pod_ready.go:94] pod "kube-controller-manager-pause-618835" is "Ready"
	I1202 22:44:45.548909  661046 pod_ready.go:86] duration metric: took 4.53038ms for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.551398  661046 pod_ready.go:83] waiting for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.558313  661046 pod_ready.go:94] pod "kube-proxy-ntbkx" is "Ready"
	I1202 22:44:45.558369  661046 pod_ready.go:86] duration metric: took 6.871073ms for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.563355  661046 pod_ready.go:83] waiting for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832191  661046 pod_ready.go:94] pod "kube-scheduler-pause-618835" is "Ready"
	I1202 22:44:45.832219  661046 pod_ready.go:86] duration metric: took 268.838109ms for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832233  661046 pod_ready.go:40] duration metric: took 8.822083917s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
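The per-pod "Ready" waits above poll each pod's PodReady condition. Expressed with client-go, as a sketch: the kubeconfig path and pod name below are illustrative, and this stands in for minikube's real helper (pod_ready.go per the log prefixes), not a copy of it:

```go
// Sketch: fetch a pod and report whether its PodReady condition is True,
// which is the check behind each `pod "..." is "Ready"` line above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(client, "kube-system", "etcd-pause-618835")
	fmt.Println(ok, err)
}
```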
	I1202 22:44:45.883405  661046 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 22:44:45.886643  661046 out.go:179] * Done! kubectl is now configured to use "pause-618835" cluster and "default" namespace by default
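The closing version note compares kubectl's minor version against the cluster's; kubectl officially supports a skew of one minor in either direction, so skew 1 is only informational. The arithmetic, as a trivial sketch with the two versions hard-coded from the log:

```go
// Sketch of the minor-skew computation behind "kubectl: 1.33.2, cluster:
// 1.34.2 (minor skew: 1)". Versions are hard-coded; real code would parse
// them from `kubectl version` output.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.2", "1.34.2"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}
```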
	I1202 22:44:50.628323  624674 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000039886s
	I1202 22:44:50.628354  624674 kubeadm.go:319] 
	I1202 22:44:50.628412  624674 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:44:50.628446  624674 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:44:50.628550  624674 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:44:50.628556  624674 kubeadm.go:319] 
	I1202 22:44:50.628660  624674 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:44:50.628692  624674 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:44:50.628723  624674 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:44:50.628727  624674 kubeadm.go:319] 
	I1202 22:44:50.632463  624674 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:44:50.632946  624674 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:44:50.633069  624674 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:44:50.633358  624674 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:44:50.633368  624674 kubeadm.go:319] 
	I1202 22:44:50.633443  624674 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:44:50.633550  624674 kubeadm.go:403] duration metric: took 12m7.563871761s to StartCluster
	I1202 22:44:50.633589  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:44:50.633655  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:44:50.685998  624674 cri.go:89] found id: ""
	I1202 22:44:50.686021  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.686029  624674 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:44:50.686036  624674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 22:44:50.686098  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:44:50.747622  624674 cri.go:89] found id: ""
	I1202 22:44:50.747647  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.747656  624674 logs.go:284] No container was found matching "etcd"
	I1202 22:44:50.747663  624674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 22:44:50.747723  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:44:50.799247  624674 cri.go:89] found id: ""
	I1202 22:44:50.799270  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.799282  624674 logs.go:284] No container was found matching "coredns"
	I1202 22:44:50.799290  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:44:50.799351  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:44:50.840068  624674 cri.go:89] found id: ""
	I1202 22:44:50.840090  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.840098  624674 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:44:50.840105  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:44:50.840163  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:44:50.866362  624674 cri.go:89] found id: ""
	I1202 22:44:50.866383  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.866391  624674 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:44:50.866397  624674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:44:50.866459  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:44:50.905098  624674 cri.go:89] found id: ""
	I1202 22:44:50.905122  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.905136  624674 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:44:50.905142  624674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 22:44:50.905198  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:44:50.941742  624674 cri.go:89] found id: ""
	I1202 22:44:50.941764  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.941772  624674 logs.go:284] No container was found matching "kindnet"
	I1202 22:44:50.941779  624674 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:44:50.941835  624674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:44:50.980178  624674 cri.go:89] found id: ""
	I1202 22:44:50.980199  624674 logs.go:282] 0 containers: []
	W1202 22:44:50.980207  624674 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:44:50.980216  624674 logs.go:123] Gathering logs for dmesg ...
	I1202 22:44:50.980230  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:44:51.009676  624674 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:44:51.009709  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:44:51.105772  624674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:44:51.105790  624674 logs.go:123] Gathering logs for CRI-O ...
	I1202 22:44:51.105805  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 22:44:51.159126  624674 logs.go:123] Gathering logs for container status ...
	I1202 22:44:51.159161  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:44:51.201613  624674 logs.go:123] Gathering logs for kubelet ...
	I1202 22:44:51.201638  624674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 22:44:51.286915  624674 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:44:51.286973  624674 out.go:285] * 
	W1202 22:44:51.287095  624674 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:44:51.287109  624674 out.go:285] * 
	W1202 22:44:51.289731  624674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:44:51.298102  624674 out.go:203] 
	W1202 22:44:51.301953  624674 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000039886s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:44:51.302062  624674 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:44:51.302123  624674 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:44:51.305722  624674 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.599845856Z" level=info msg="Neither image nor artfiact registry.k8s.io/coredns/coredns:v1.13.1 found" id=5a117f8c-2690-4aca-a232-99f5e1c33216 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.610290917Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=35b8b1ed-455e-4cb8-a0d4-67df0613b8fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.610443288Z" level=info msg="Image registry.k8s.io/kube-proxy:v1.35.0-beta.0 not found" id=35b8b1ed-455e-4cb8-a0d4-67df0613b8fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.610481385Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-proxy:v1.35.0-beta.0 found" id=35b8b1ed-455e-4cb8-a0d4-67df0613b8fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.671715688Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=86cb3f2f-2376-4728-8bdf-0597926b0d40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.671881696Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=86cb3f2f-2376-4728-8bdf-0597926b0d40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.671935079Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=86cb3f2f-2376-4728-8bdf-0597926b0d40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.755172502Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=098a46b5-9d65-4bd0-9c14-2aef6151e2ad name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.755371404Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=098a46b5-9d65-4bd0-9c14-2aef6151e2ad name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:25 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:25.755422506Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=098a46b5-9d65-4bd0-9c14-2aef6151e2ad name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:32:26 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:32:26.156622786Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9234e6be-6906-4601-9d5b-bc988133d80b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.907184425Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f559693a-513f-4392-a644-005b11f0518d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.915299715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d43184eb-0372-435d-b3d5-0f65be86a353 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.916689745Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=71185138-38c0-4d64-af10-3dd2474156d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.91800935Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=99eaae30-600b-48df-babb-09c1df697c6f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.918786688Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0f14296f-5f5a-4258-9e08-5bdec5d5820f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.920299763Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=197e61f5-351b-4e21-a050-423917842915 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:36:46 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:36:46.922416086Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=2e37698e-69bb-401b-b840-d8e76dbd6cfa name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.332518384Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=394e2676-9d79-4b78-b2f5-29361ff5dc72 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.335907123Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7a32b8d9-3113-4472-9adf-e19171e023c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.3379335Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d3e20b23-6502-4106-a8a6-8d29495f0719 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.349418746Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ab964848-b814-49dd-b188-6cc8efc672fa name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.350454884Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a1e641a0-682f-475e-b1a2-ffd6e13350d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.352276442Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=81084695-0087-43e5-a7a6-e2bff5589810 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 22:40:49 kubernetes-upgrade-636006 crio[614]: time="2025-12-02T22:40:49.353355379Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0b70cfb8-2435-45be-9762-01a7b072ceee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 22:09] overlayfs: idmapped layers are currently not supported
	[  +2.910244] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:11] overlayfs: idmapped layers are currently not supported
	[ +41.264115] hrtimer: interrupt took 8638023 ns
	[Dec 2 22:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:17] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:18] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:21] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:23] overlayfs: idmapped layers are currently not supported
	[ +16.312722] overlayfs: idmapped layers are currently not supported
	[  +9.098621] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:25] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:26] overlayfs: idmapped layers are currently not supported
	[ +25.910639] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:27] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.250662] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:28] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:30] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:32] overlayfs: idmapped layers are currently not supported
	[ +24.664804] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:43] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:44:53 up  4:27,  0 user,  load average: 1.55, 1.42, 1.61
	Linux kubernetes-upgrade-636006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:44:50 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:44:51 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 02 22:44:51 kubernetes-upgrade-636006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:51 kubernetes-upgrade-636006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:51 kubernetes-upgrade-636006 kubelet[12856]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:51 kubernetes-upgrade-636006 kubelet[12856]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:51 kubernetes-upgrade-636006 kubelet[12856]: E1202 22:44:51.779302   12856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:44:51 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:44:51 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:44:52 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 02 22:44:52 kubernetes-upgrade-636006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:52 kubernetes-upgrade-636006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:52 kubernetes-upgrade-636006 kubelet[12875]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:52 kubernetes-upgrade-636006 kubelet[12875]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:52 kubernetes-upgrade-636006 kubelet[12875]: E1202 22:44:52.491424   12875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:44:52 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:44:52 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:44:53 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 02 22:44:53 kubernetes-upgrade-636006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:53 kubernetes-upgrade-636006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:44:53 kubernetes-upgrade-636006 kubelet[12944]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:53 kubernetes-upgrade-636006 kubelet[12944]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 22:44:53 kubernetes-upgrade-636006 kubelet[12944]: E1202 22:44:53.220804   12944 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:44:53 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:44:53 kubernetes-upgrade-636006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
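The log dump above shows the cause of the restart loop directly: kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), so the healthz endpoint on 127.0.0.1:10248 never answers and kubeadm times out after 4m0s. A generic way to confirm which cgroup hierarchy a host mounts (a standard check, not taken from this run):

	# "cgroup2fs" means cgroups v2; "tmpfs" means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup

The [WARNING SystemVerification] message names the documented escape hatch for cgroup v1 hosts: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation. A minimal sketch of that workaround, assuming the kubelet config path written earlier in the log (/var/lib/kubelet/config.yaml) and the camelCase YAML spelling of the option; neither step was attempted by this test run:

	# append the override to the kubelet config generated during init
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	# re-run init with the SystemVerification preflight check skipped
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification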
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-636006 -n kubernetes-upgrade-636006
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-636006 -n kubernetes-upgrade-636006: exit status 2 (414.740087ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-636006" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-636006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-636006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-636006: (2.550240835s)
--- FAIL: TestKubernetesUpgrade (802.23s)

TestPause/serial/Pause (7.58s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-618835 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-618835 --alsologtostderr -v=5: exit status 80 (2.380237929s)

-- stdout --
	* Pausing node pause-618835 ... 
	
	

-- /stdout --
** stderr ** 
	I1202 22:44:45.971354  662318 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:44:45.971549  662318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:45.971576  662318 out.go:374] Setting ErrFile to fd 2...
	I1202 22:44:45.971596  662318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:45.971888  662318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:44:45.972178  662318 out.go:368] Setting JSON to false
	I1202 22:44:45.972228  662318 mustload.go:66] Loading cluster: pause-618835
	I1202 22:44:45.972708  662318 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:45.973254  662318 cli_runner.go:164] Run: docker container inspect pause-618835 --format={{.State.Status}}
	I1202 22:44:45.990002  662318 host.go:66] Checking if "pause-618835" exists ...
	I1202 22:44:45.990316  662318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:46.057178  662318 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:46.047733247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:46.057862  662318 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-618835 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 22:44:46.060824  662318 out.go:179] * Pausing node pause-618835 ... 
	I1202 22:44:46.064402  662318 host.go:66] Checking if "pause-618835" exists ...
	I1202 22:44:46.064762  662318 ssh_runner.go:195] Run: systemctl --version
	I1202 22:44:46.064819  662318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:46.082453  662318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:46.187253  662318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:46.200732  662318 pause.go:52] kubelet running: true
	I1202 22:44:46.200815  662318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 22:44:46.430228  662318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 22:44:46.430353  662318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 22:44:46.494385  662318 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:46.494413  662318 cri.go:89] found id: "9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b"
	I1202 22:44:46.494418  662318 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:46.494422  662318 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:46.494425  662318 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:46.494428  662318 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:46.494432  662318 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:46.494434  662318 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:46.494439  662318 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:46.494445  662318 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:46.494448  662318 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:46.494451  662318 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:46.494455  662318 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:46.494458  662318 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:46.494461  662318 cri.go:89] found id: ""
	I1202 22:44:46.494516  662318 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 22:44:46.506265  662318 retry.go:31] will retry after 191.613249ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:46Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:46.698784  662318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:46.711841  662318 pause.go:52] kubelet running: false
	I1202 22:44:46.711906  662318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 22:44:46.846111  662318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 22:44:46.846192  662318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 22:44:46.929750  662318 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:46.929781  662318 cri.go:89] found id: "9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b"
	I1202 22:44:46.929786  662318 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:46.929789  662318 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:46.929792  662318 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:46.929796  662318 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:46.929800  662318 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:46.929803  662318 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:46.929806  662318 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:46.929812  662318 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:46.929816  662318 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:46.929818  662318 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:46.929822  662318 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:46.929825  662318 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:46.929828  662318 cri.go:89] found id: ""
	I1202 22:44:46.929886  662318 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 22:44:46.943779  662318 retry.go:31] will retry after 238.703947ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:46Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:47.183277  662318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:47.196094  662318 pause.go:52] kubelet running: false
	I1202 22:44:47.196168  662318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 22:44:47.337519  662318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 22:44:47.337645  662318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 22:44:47.402712  662318 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:47.402745  662318 cri.go:89] found id: "9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b"
	I1202 22:44:47.402750  662318 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:47.402754  662318 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:47.402757  662318 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:47.402761  662318 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:47.402764  662318 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:47.402767  662318 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:47.402771  662318 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:47.402788  662318 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:47.402803  662318 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:47.402806  662318 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:47.402809  662318 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:47.402816  662318 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:47.402821  662318 cri.go:89] found id: ""
	I1202 22:44:47.402885  662318 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 22:44:47.413936  662318 retry.go:31] will retry after 635.542505ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:47Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:48.049798  662318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:48.063824  662318 pause.go:52] kubelet running: false
	I1202 22:44:48.063890  662318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 22:44:48.200139  662318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 22:44:48.200277  662318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 22:44:48.270468  662318 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:48.270493  662318 cri.go:89] found id: "9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b"
	I1202 22:44:48.270498  662318 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:48.270502  662318 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:48.270506  662318 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:48.270510  662318 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:48.270513  662318 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:48.270515  662318 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:48.270518  662318 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:48.270524  662318 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:48.270527  662318 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:48.270530  662318 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:48.270533  662318 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:48.270536  662318 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:48.270539  662318 cri.go:89] found id: ""
	I1202 22:44:48.270593  662318 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 22:44:48.285504  662318 out.go:203] 
	W1202 22:44:48.288416  662318 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 22:44:48.288438  662318 out.go:285] * 
	* 
	W1202 22:44:48.294791  662318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:44:48.297582  662318 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-618835 --alsologtostderr -v=5" : exit status 80
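This failure is narrower than the exit banner suggests: minikube's pause path repeatedly runs `sudo runc list -f json` on the node to enumerate running containers, and every attempt fails with "open /run/runc: no such file or directory", i.e. runc's default state directory was never created on this CRI-O node (the log does not say which OCI runtime or state root CRI-O is actually using). A hedged way to reproduce the failing step, and to list the same containers without depending on runc's state directory, reusing the profile name and the crictl tooling visible in this log:

	# reproduce the exact call minikube's pause path makes (fails as in the log above)
	minikube ssh -p pause-618835 "sudo runc list -f json"
	# query CRI-O directly instead; crictl does not depend on /run/runc
	minikube ssh -p pause-618835 "sudo crictl ps -a"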
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-618835
helpers_test.go:243: (dbg) docker inspect pause-618835:

-- stdout --
	[
	    {
	        "Id": "92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71",
	        "Created": "2025-12-02T22:43:04.500240363Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 658444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:43:04.536118217Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/hostname",
	        "HostsPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/hosts",
	        "LogPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71-json.log",
	        "Name": "/pause-618835",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-618835:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-618835",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71",
	                "LowerDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-618835",
	                "Source": "/var/lib/docker/volumes/pause-618835/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-618835",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-618835",
	                "name.minikube.sigs.k8s.io": "pause-618835",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f12e75d3b702491bdddd35bc9a105023156ef7ca7196bfae2c6f8683b462e21",
	            "SandboxKey": "/var/run/docker/netns/8f12e75d3b70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-618835": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a6:84:2b:11:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed72467687ca4d747c2a4bec2bff64ff8e3ce49a32e16d096cf183a20cf3652f",
	                    "EndpointID": "cc85a707800f542edab5fc6d4c58469845cda4b90ec8cd4ee6c7c161b58dab84",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-618835",
	                        "92a62975dc25"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-618835 -n pause-618835
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-618835 -n pause-618835: exit status 2 (360.945367ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-618835 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-618835 logs -n 25: (1.38210043s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-245878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-825984    │ jenkins │ v1.35.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ stop    │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ stop    │ -p kubernetes-upgrade-636006                                                                                                                    │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │                     │
	│ delete  │ -p missing-upgrade-825984                                                                                                                       │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:33 UTC │
	│ stop    │ stopped-upgrade-013069 stop                                                                                                                     │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:33 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:37 UTC │
	│ delete  │ -p stopped-upgrade-013069                                                                                                                       │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:37 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-873899    │ jenkins │ v1.35.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:38 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:42 UTC │
	│ delete  │ -p running-upgrade-873899                                                                                                                       │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ start   │ -p pause-618835 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:44 UTC │
	│ start   │ -p pause-618835 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │ 02 Dec 25 22:44 UTC │
	│ pause   │ -p pause-618835 --alsologtostderr -v=5                                                                                                          │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:44:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
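	The four header lines above follow the standard glog layout: a severity letter (I/W/E/F), the date as mmdd, a wall-clock time with microseconds, the thread id, and the emitting file:line. As a minimal sketch for summarizing a dump like this by severity, assuming the log has been saved to a local file (minikube.log is a hypothetical name):
	
	  # count log lines per glog severity letter
	  awk '$1 ~ /^[IWEF][0-9][0-9][0-9][0-9]$/ { n[substr($1,1,1)]++ } END { for (l in n) print l, n[l] }' minikube.log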
	I1202 22:44:21.772077  661046 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:44:21.772265  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772298  661046 out.go:374] Setting ErrFile to fd 2...
	I1202 22:44:21.772315  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772599  661046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:44:21.772984  661046 out.go:368] Setting JSON to false
	I1202 22:44:21.774176  661046 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15990,"bootTime":1764699472,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 22:44:21.774280  661046 start.go:143] virtualization:  
	I1202 22:44:21.777389  661046 out.go:179] * [pause-618835] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:44:21.781217  661046 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:44:21.781370  661046 notify.go:221] Checking for updates...
	I1202 22:44:21.787658  661046 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:44:21.790449  661046 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:21.793367  661046 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 22:44:21.796242  661046 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:44:21.799091  661046 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:44:21.802432  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:21.803069  661046 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:44:21.853224  661046 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:44:21.853415  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.912680  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.903202911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.912785  661046 docker.go:319] overlay module found
	I1202 22:44:21.915810  661046 out.go:179] * Using the docker driver based on existing profile
	I1202 22:44:21.918579  661046 start.go:309] selected driver: docker
	I1202 22:44:21.918598  661046 start.go:927] validating driver "docker" against &{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.918734  661046 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:44:21.918838  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.986340  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.976842771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.986742  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:21.986812  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:21.986865  661046 start.go:353] cluster config:
	{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.991739  661046 out.go:179] * Starting "pause-618835" primary control-plane node in "pause-618835" cluster
	I1202 22:44:21.994531  661046 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 22:44:21.997589  661046 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:44:22.000564  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:22.000717  661046 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 22:44:22.000743  661046 cache.go:65] Caching tarball of preloaded images
	I1202 22:44:22.000656  661046 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:44:22.001213  661046 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 22:44:22.001266  661046 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 22:44:22.001536  661046 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/config.json ...
	I1202 22:44:22.024507  661046 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:44:22.024534  661046 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:44:22.024549  661046 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:44:22.024584  661046 start.go:360] acquireMachinesLock for pause-618835: {Name:mke18653c2307ed5537ca2391ee1b331ce530ab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:44:22.024646  661046 start.go:364] duration metric: took 38.532µs to acquireMachinesLock for "pause-618835"
	I1202 22:44:22.024671  661046 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:44:22.024676  661046 fix.go:54] fixHost starting: 
	I1202 22:44:22.024950  661046 cli_runner.go:164] Run: docker container inspect pause-618835 --format={{.State.Status}}
	I1202 22:44:22.043037  661046 fix.go:112] recreateIfNeeded on pause-618835: state=Running err=<nil>
	W1202 22:44:22.043071  661046 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:44:22.046253  661046 out.go:252] * Updating the running docker "pause-618835" container ...
	I1202 22:44:22.046306  661046 machine.go:94] provisionDockerMachine start ...
	I1202 22:44:22.046465  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.064267  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.064602  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.064627  661046 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:44:22.214410  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
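	The port lookup feeding the SSH dial above is a plain Go template evaluated by docker inspect; stripped of the extra log quoting, it can be run by hand against the same container:
	
	  # prints the host port mapped to the container's 22/tcp (33400 in this run)
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-618835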
	I1202 22:44:22.214484  661046 ubuntu.go:182] provisioning hostname "pause-618835"
	I1202 22:44:22.214603  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.236613  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.236939  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.236955  661046 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-618835 && echo "pause-618835" | sudo tee /etc/hostname
	I1202 22:44:22.400346  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
	I1202 22:44:22.400434  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.429430  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.429764  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.429797  661046 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-618835' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-618835/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-618835' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:44:22.579235  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
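	The hosts script above is idempotent: it rewrites the 127.0.1.1 line only when the hostname is not already present, and appends one otherwise. A quick verification on the node, assuming the same profile name:
	
	  # expected output: 127.0.1.1 pause-618835
	  grep '^127\.0\.1\.1' /etc/hosts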
	I1202 22:44:22.579261  661046 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 22:44:22.579284  661046 ubuntu.go:190] setting up certificates
	I1202 22:44:22.579293  661046 provision.go:84] configureAuth start
	I1202 22:44:22.579352  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:22.596685  661046 provision.go:143] copyHostCerts
	I1202 22:44:22.596760  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 22:44:22.596778  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 22:44:22.596853  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 22:44:22.596973  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 22:44:22.596983  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 22:44:22.597014  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 22:44:22.597119  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 22:44:22.597130  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 22:44:22.597155  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 22:44:22.597213  661046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.pause-618835 san=[127.0.0.1 192.168.85.2 localhost minikube pause-618835]
	I1202 22:44:22.983637  661046 provision.go:177] copyRemoteCerts
	I1202 22:44:22.983707  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:44:22.983761  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.001895  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:23.106664  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 22:44:23.123704  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 22:44:23.141408  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 22:44:23.158215  661046 provision.go:87] duration metric: took 578.901326ms to configureAuth
	I1202 22:44:23.158243  661046 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:44:23.158477  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:23.158589  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.176100  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:23.176429  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:23.176448  661046 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 22:44:28.569900  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 22:44:28.569928  661046 machine.go:97] duration metric: took 6.523605112s to provisionDockerMachine
	I1202 22:44:28.569941  661046 start.go:293] postStartSetup for "pause-618835" (driver="docker")
	I1202 22:44:28.569952  661046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:44:28.570028  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:44:28.570073  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.587950  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.695041  661046 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:44:28.698567  661046 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:44:28.698595  661046 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:44:28.698607  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 22:44:28.698664  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 22:44:28.698757  661046 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 22:44:28.698862  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:44:28.706619  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:28.724907  661046 start.go:296] duration metric: took 154.950883ms for postStartSetup
	I1202 22:44:28.725007  661046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:44:28.725050  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.742944  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.844317  661046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:44:28.849218  661046 fix.go:56] duration metric: took 6.824535089s for fixHost
	I1202 22:44:28.849245  661046 start.go:83] releasing machines lock for "pause-618835", held for 6.824586601s
	I1202 22:44:28.849316  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:28.865850  661046 ssh_runner.go:195] Run: cat /version.json
	I1202 22:44:28.865915  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.866162  661046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:44:28.866214  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.884723  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.892719  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.990565  661046 ssh_runner.go:195] Run: systemctl --version
	I1202 22:44:29.095942  661046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 22:44:29.136291  661046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:44:29.140549  661046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:44:29.140626  661046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:44:29.149352  661046 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
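	The find invocation above disables any conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so a later start can restore them. With ordinary shell quoting (same paths, same semantics), the command is roughly:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;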
	I1202 22:44:29.149375  661046 start.go:496] detecting cgroup driver to use...
	I1202 22:44:29.149405  661046 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:44:29.149458  661046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 22:44:29.164433  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 22:44:29.177340  661046 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:44:29.177454  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:44:29.193150  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:44:29.205859  661046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:44:29.344604  661046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:44:29.473058  661046 docker.go:234] disabling docker service ...
	I1202 22:44:29.473200  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:44:29.488559  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:44:29.502024  661046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:44:29.637356  661046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:44:29.800358  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:44:29.814789  661046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:44:29.829527  661046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 22:44:29.829606  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.838693  661046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 22:44:29.838809  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.848300  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.857902  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.867485  661046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:44:29.876198  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.886566  661046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.897113  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.906342  661046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:44:29.914235  661046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:44:29.921742  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.054347  661046 ssh_runner.go:195] Run: sudo systemctl restart crio
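	All of the runtime tuning above lands in one drop-in, /etc/crio/crio.conf.d/02-crio.conf: the pause image, cgroupfs as cgroup manager, the conmon cgroup, and the unprivileged-port sysctl, followed by a daemon-reload and a CRI-O restart. A spot-check of the result on the node might look like:
	
	  sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl is-active crio   # should print: active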
	I1202 22:44:30.263291  661046 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 22:44:30.263379  661046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 22:44:30.267266  661046 start.go:564] Will wait 60s for crictl version
	I1202 22:44:30.267376  661046 ssh_runner.go:195] Run: which crictl
	I1202 22:44:30.270908  661046 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:44:30.295521  661046 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 22:44:30.295660  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.328562  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.364952  661046 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 22:44:30.367831  661046 cli_runner.go:164] Run: docker network inspect pause-618835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:44:30.383864  661046 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:44:30.387835  661046 kubeadm.go:884] updating cluster {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:44:30.387986  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:30.388044  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.427855  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.427881  661046 crio.go:433] Images already preloaded, skipping extraction
	I1202 22:44:30.427941  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.460051  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.460076  661046 cache_images.go:86] Images are preloaded, skipping loading
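	crictl images --output json returns one JSON object with an images array, which is what crio.go inspects here. Assuming jq is available on the node (it is not guaranteed in the base image), the preloaded tags can be listed directly:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json \
	    | jq -r '.images[].repoTags[]'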
	I1202 22:44:30.460085  661046 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 22:44:30.460195  661046 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-618835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
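	The unit fragment above uses the standard systemd drop-in pattern: the first, empty ExecStart= clears the command line from the packaged unit, and the second ExecStart= replaces it with the minikube-managed kubelet invocation. To see what systemd actually merged, one can run on the node:
	
	  systemctl cat kubelet                            # unit file plus all drop-ins
	  systemctl show kubelet -p ExecStart --no-pager   # the effective command line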
	I1202 22:44:30.460284  661046 ssh_runner.go:195] Run: crio config
	I1202 22:44:30.522967  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:30.522993  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:30.523027  661046 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:44:30.523050  661046 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-618835 NodeName:pause-618835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:44:30.523182  661046 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-618835"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
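	The generated file is a multi-document YAML carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one stream. Recent kubeadm releases ship a validator subcommand; assuming it is available in this v1.34.2 binary, the file written below as kubeadm.yaml.new could be checked with:
	
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new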
	I1202 22:44:30.523263  661046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 22:44:30.530844  661046 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:44:30.530942  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:44:30.538341  661046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 22:44:30.551648  661046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:44:30.564596  661046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 22:44:30.577069  661046 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:44:30.580727  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.703997  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:30.716780  661046 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835 for IP: 192.168.85.2
	I1202 22:44:30.716853  661046 certs.go:195] generating shared ca certs ...
	I1202 22:44:30.716880  661046 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:30.717060  661046 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 22:44:30.717130  661046 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 22:44:30.717172  661046 certs.go:257] generating profile certs ...
	I1202 22:44:30.717299  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key
	I1202 22:44:30.717406  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key.1773daca
	I1202 22:44:30.717507  661046 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key
	I1202 22:44:30.717663  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 22:44:30.717726  661046 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 22:44:30.717766  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:44:30.717819  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 22:44:30.717877  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:44:30.717924  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 22:44:30.718011  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:30.718867  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:44:30.740148  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:44:30.759607  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:44:30.777756  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 22:44:30.795266  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 22:44:30.812606  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:44:30.829845  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:44:30.847785  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 22:44:30.865490  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 22:44:30.882582  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 22:44:30.900054  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:44:30.917424  661046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:44:30.930080  661046 ssh_runner.go:195] Run: openssl version
	I1202 22:44:30.936710  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:44:30.944864  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949450  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949515  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:31.010598  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:44:31.029090  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 22:44:31.049679  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054810  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054882  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.147569  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 22:44:31.170321  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 22:44:31.257461  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.266924  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.267022  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.370520  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
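	The three link steps above implement OpenSSL's hashed-directory convention: every CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash is exactly what openssl x509 -hash prints (b5213941, 51391683 and 3ec20f2e in this run). Written out directly, the pattern is:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"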
	I1202 22:44:31.382909  661046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:44:31.391520  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:44:31.453313  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:44:31.516119  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:44:31.579296  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:44:31.643958  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:44:31.703980  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
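	Each -checkend 86400 call exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a regeneration pass is needed before reusing the cluster. In isolation:
	
	  # exit status 0 while the cert remains valid for another 24h (path taken from this run)
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo 'valid for 24h+' || echo 'expiring soon'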
	I1202 22:44:31.772148  661046 kubeadm.go:401] StartCluster: {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:31.772275  661046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 22:44:31.772341  661046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:44:31.832530  661046 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:31.832557  661046 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:31.832562  661046 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:31.832565  661046 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:31.832570  661046 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:31.832574  661046 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:31.832577  661046 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:31.832580  661046 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:31.832583  661046 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:31.832590  661046 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:31.832594  661046 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:31.832597  661046 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:31.832601  661046 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:31.832604  661046 cri.go:89] found id: ""
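The IDs above come from one crictl call filtered on the pod-namespace label. A standalone sketch of the same listing in Go (plain os/exec rather than minikube's ssh_runner; assumes crictl is on PATH and pointed at the CRI-O socket):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as the log above: all containers (-a), IDs only
        // (--quiet), restricted to pods in the kube-system namespace.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }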
	I1202 22:44:31.832653  661046 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 22:44:31.850816  661046 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:31Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:31.850899  661046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:44:31.860604  661046 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:44:31.860624  661046 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:44:31.860677  661046 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:44:31.876850  661046 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:44:31.877498  661046 kubeconfig.go:125] found "pause-618835" server: "https://192.168.85.2:8443"
	I1202 22:44:31.878317  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:31.878820  661046 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 22:44:31.878845  661046 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 22:44:31.878851  661046 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 22:44:31.878856  661046 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 22:44:31.878860  661046 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
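The five envvar.go lines are k8s.io/client-go logging the defaults of its environment-driven feature gates. As I understand that mechanism, each gate can be flipped per process with a KUBE_FEATURE_<Name> variable, read on first use, so any override has to happen before the first client is constructed; a hypothetical toggle:

    package main

    import (
        "fmt"
        "os"

        clientfeatures "k8s.io/client-go/features"
    )

    func main() {
        // Hypothetical override: must run before the first clientset is
        // built, because the env-var gates are snapshotted on first use.
        os.Setenv("KUBE_FEATURE_WatchListClient", "true")
        fmt.Println("WatchListClient enabled:",
            clientfeatures.FeatureGates().Enabled(clientfeatures.WatchListClient))
    }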
	I1202 22:44:31.879145  661046 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:44:31.899458  661046 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:44:31.899491  661046 kubeadm.go:602] duration metric: took 38.860784ms to restartPrimaryControlPlane
	I1202 22:44:31.899500  661046 kubeadm.go:403] duration metric: took 127.364099ms to StartCluster
	I1202 22:44:31.899517  661046 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.899581  661046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:31.900441  661046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.900682  661046 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 22:44:31.901008  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:31.901066  661046 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:44:31.904906  661046 out.go:179] * Enabled addons: 
	I1202 22:44:31.904973  661046 out.go:179] * Verifying Kubernetes components...
	I1202 22:44:31.907865  661046 addons.go:530] duration metric: took 6.798498ms for enable addons: enabled=[]
	I1202 22:44:31.907952  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:32.213405  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:32.235526  661046 node_ready.go:35] waiting up to 6m0s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909622  661046 node_ready.go:49] node "pause-618835" is "Ready"
	I1202 22:44:35.909697  661046 node_ready.go:38] duration metric: took 3.674124436s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909726  661046 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:44:35.909814  661046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:44:35.929353  661046 api_server.go:72] duration metric: took 4.028633544s to wait for apiserver process to appear ...
	I1202 22:44:35.929419  661046 api_server.go:88] waiting for apiserver healthz status ...
	I1202 22:44:35.929461  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:35.952358  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 22:44:35.952443  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 22:44:36.430070  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.442739  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 22:44:36.442817  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 22:44:36.930397  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.938429  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
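The 403 -> 500 -> 200 progression above is the usual restart sequence: anonymous access to /healthz is governed by RBAC (the system:public-info-viewer binding), so the first probe reads as Forbidden until the apiserver's RBAC machinery is serving; the interim 500 then names the still-pending post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) before everything reports ok. A minimal standalone poller in the same spirit (URL taken from this run; InsecureSkipVerify is tolerable only against a throwaway test cluster):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }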
	I1202 22:44:36.940086  661046 api_server.go:141] control plane version: v1.34.2
	I1202 22:44:36.940124  661046 api_server.go:131] duration metric: took 1.010684624s to wait for apiserver health ...
	I1202 22:44:36.940133  661046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 22:44:36.953714  661046 system_pods.go:59] 7 kube-system pods found
	I1202 22:44:36.953756  661046 system_pods.go:61] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.953767  661046 system_pods.go:61] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.953772  661046 system_pods.go:61] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.953780  661046 system_pods.go:61] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.953785  661046 system_pods.go:61] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.953789  661046 system_pods.go:61] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.953801  661046 system_pods.go:61] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.953813  661046 system_pods.go:74] duration metric: took 13.674774ms to wait for pod list to return data ...
	I1202 22:44:36.953824  661046 default_sa.go:34] waiting for default service account to be created ...
	I1202 22:44:36.979958  661046 default_sa.go:45] found service account: "default"
	I1202 22:44:36.979988  661046 default_sa.go:55] duration metric: took 26.153398ms for default service account to be created ...
	I1202 22:44:36.980007  661046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 22:44:36.983586  661046 system_pods.go:86] 7 kube-system pods found
	I1202 22:44:36.983626  661046 system_pods.go:89] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.983636  661046 system_pods.go:89] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.983643  661046 system_pods.go:89] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.983651  661046 system_pods.go:89] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.983656  661046 system_pods.go:89] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.983660  661046 system_pods.go:89] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.983668  661046 system_pods.go:89] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.983676  661046 system_pods.go:126] duration metric: took 3.662768ms to wait for k8s-apps to be running ...
	I1202 22:44:36.983695  661046 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 22:44:36.983753  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:36.997015  661046 system_svc.go:56] duration metric: took 13.311979ms WaitForService to wait for kubelet
	I1202 22:44:36.997060  661046 kubeadm.go:587] duration metric: took 5.096345819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:44:36.997082  661046 node_conditions.go:102] verifying NodePressure condition ...
	I1202 22:44:37.004213  661046 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 22:44:37.004260  661046 node_conditions.go:123] node cpu capacity is 2
	I1202 22:44:37.004276  661046 node_conditions.go:105] duration metric: took 7.186342ms to run NodePressure ...
	I1202 22:44:37.004292  661046 start.go:242] waiting for startup goroutines ...
	I1202 22:44:37.004307  661046 start.go:247] waiting for cluster config update ...
	I1202 22:44:37.004317  661046 start.go:256] writing updated cluster config ...
	I1202 22:44:37.004731  661046 ssh_runner.go:195] Run: rm -f paused
	I1202 22:44:37.010102  661046 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:44:37.010946  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:37.016129  661046 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.024271  661046 pod_ready.go:94] pod "coredns-66bc5c9577-q74fb" is "Ready"
	I1202 22:44:37.024312  661046 pod_ready.go:86] duration metric: took 8.141921ms for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.027905  661046 pod_ready.go:83] waiting for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:39.034075  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	W1202 22:44:41.533868  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:43.034100  661046 pod_ready.go:94] pod "etcd-pause-618835" is "Ready"
	I1202 22:44:43.034126  661046 pod_ready.go:86] duration metric: took 6.006192029s for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:43.036482  661046 pod_ready.go:83] waiting for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:45.049393  661046 pod_ready.go:104] pod "kube-apiserver-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:45.541910  661046 pod_ready.go:94] pod "kube-apiserver-pause-618835" is "Ready"
	I1202 22:44:45.541937  661046 pod_ready.go:86] duration metric: took 2.505424103s for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.544357  661046 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.548883  661046 pod_ready.go:94] pod "kube-controller-manager-pause-618835" is "Ready"
	I1202 22:44:45.548909  661046 pod_ready.go:86] duration metric: took 4.53038ms for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.551398  661046 pod_ready.go:83] waiting for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.558313  661046 pod_ready.go:94] pod "kube-proxy-ntbkx" is "Ready"
	I1202 22:44:45.558369  661046 pod_ready.go:86] duration metric: took 6.871073ms for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.563355  661046 pod_ready.go:83] waiting for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832191  661046 pod_ready.go:94] pod "kube-scheduler-pause-618835" is "Ready"
	I1202 22:44:45.832219  661046 pod_ready.go:86] duration metric: took 268.838109ms for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832233  661046 pod_ready.go:40] duration metric: took 8.822083917s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:44:45.883405  661046 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 22:44:45.886643  661046 out.go:179] * Done! kubectl is now configured to use "pause-618835" cluster and "default" namespace by default
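The pod_ready waits above check one pod per control-plane label selector and pass once each reports the Ready condition. A rough client-go equivalent of that final sweep (kubeconfig path copied from this run; substitute your own):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/21997-444114/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Same selectors the log waits on, one per control-plane component.
        selectors := []string{"k8s-app=kube-dns", "component=etcd",
            "component=kube-apiserver", "component=kube-controller-manager",
            "k8s-app=kube-proxy", "component=kube-scheduler"}
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                    }
                }
                fmt.Printf("%s ready=%v\n", p.Name, ready)
            }
        }
    }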
	
	
	==> CRI-O <==
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.275873525Z" level=info msg="Starting container: 8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5" id=5b772e18-5560-4a51-bbbf-69b1dd498e8b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.300999367Z" level=info msg="Starting container: ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1" id=f8a578a2-3d7b-42db-96f9-7564eb81cc74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.302974248Z" level=info msg="Started container" PID=2328 containerID=ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1 description=kube-system/coredns-66bc5c9577-q74fb/coredns id=f8a578a2-3d7b-42db-96f9-7564eb81cc74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb4dba1d0c8e7d09ea24fc705f8fdb059d82b449e314d924bbfa968a82dfe891
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.322235781Z" level=info msg="Started container" PID=2318 containerID=8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5 description=kube-system/kube-apiserver-pause-618835/kube-apiserver id=5b772e18-5560-4a51-bbbf-69b1dd498e8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=3263d4c876737fb7e44e5c5b3d3673461f7e40ab79b40370c4320c95c5ee9404
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.350455836Z" level=info msg="Created container ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a: kube-system/kindnet-6zfrp/kindnet-cni" id=a369a2f1-f8d4-43cc-a7e1-e1cc845dba10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.351196882Z" level=info msg="Starting container: ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a" id=34a2b4ca-5240-49da-902c-1f2e7984d5bf name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.353249581Z" level=info msg="Started container" PID=2353 containerID=ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a description=kube-system/kindnet-6zfrp/kindnet-cni id=34a2b4ca-5240-49da-902c-1f2e7984d5bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=cafe528068ddfaf29b6141c08fc1ef2f78405e5a268527bae0efbb7df8b15a6d
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.098779976Z" level=info msg="Created container 9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b: kube-system/kube-proxy-ntbkx/kube-proxy" id=3199437c-c0c5-47fc-b312-fe8a15e6f53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.106110139Z" level=info msg="Starting container: 9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b" id=bfeadfe4-ad1c-4db8-8c1f-00066a4cc4f1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.109571485Z" level=info msg="Started container" PID=2341 containerID=9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b description=kube-system/kube-proxy-ntbkx/kube-proxy id=bfeadfe4-ad1c-4db8-8c1f-00066a4cc4f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6600c7284d6bf779717d7b0feabf264604b09ba26e0e23220de7119f289f018
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.734711999Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738544762Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738577337Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738602387Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.74198489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.742021419Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.742043746Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746364892Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746406262Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746430706Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749739706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749774832Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749797249Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.753060841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.753102679Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ebc7a7f77724b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   18 seconds ago       Running             kindnet-cni               1                   cafe528068ddf       kindnet-6zfrp                          kube-system
	9b2e7721431c1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   18 seconds ago       Running             kube-proxy                1                   d6600c7284d6b       kube-proxy-ntbkx                       kube-system
	ebad6ba86b6d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   18 seconds ago       Running             coredns                   1                   fb4dba1d0c8e7       coredns-66bc5c9577-q74fb               kube-system
	8ee6d1594ad32       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   18 seconds ago       Running             kube-apiserver            1                   3263d4c876737       kube-apiserver-pause-618835            kube-system
	8d1b9e360db2b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   18 seconds ago       Running             etcd                      1                   64ca0e96510b7       etcd-pause-618835                      kube-system
	b0691e31e9e7b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   18 seconds ago       Running             kube-controller-manager   1                   f74512c65b80a       kube-controller-manager-pause-618835   kube-system
	6d242b830fba6       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   18 seconds ago       Running             kube-scheduler            1                   8a1500d4db2ca       kube-scheduler-pause-618835            kube-system
	83cb63477208e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   29 seconds ago       Exited              coredns                   0                   fb4dba1d0c8e7       coredns-66bc5c9577-q74fb               kube-system
	8f5ab902bbd3e       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   d6600c7284d6b       kube-proxy-ntbkx                       kube-system
	5b54b32ca6e4a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   cafe528068ddf       kindnet-6zfrp                          kube-system
	17c10f7e06826       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   3263d4c876737       kube-apiserver-pause-618835            kube-system
	ec88d90be5db8       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f74512c65b80a       kube-controller-manager-pause-618835   kube-system
	12e8d079d7940       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   64ca0e96510b7       etcd-pause-618835                      kube-system
	ed638ee4ec741       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   8a1500d4db2ca       kube-scheduler-pause-618835            kube-system
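Note how every Exited row (ATTEMPT 0) has a Running counterpart (ATTEMPT 1) with the same POD ID, e.g. coredns 83cb63477208e and ebad6ba86b6d8 both in sandbox fb4dba1d0c8e7: the pause/unpause cycle restarted the containers inside the existing pod sandboxes rather than recreating the pods.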
	
	
	==> coredns [83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54243 - 61826 "HINFO IN 1715722739181043788.1090242272406961120. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052802843s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32886 - 51048 "HINFO IN 5360015328477056735.6666557341340843959. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051361801s
	
	
	==> describe nodes <==
	Name:               pause-618835
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-618835
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=pause-618835
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T22_43_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 22:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-618835
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 22:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-618835
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                3e738705-4c5e-466f-a52e-ac9561bbcbff
	  Boot ID:                    c77b83b8-287c-4d91-bf3a-e2991f41400e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q74fb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     72s
	  kube-system                 etcd-pause-618835                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kindnet-6zfrp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-618835             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-618835    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-ntbkx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-618835             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
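The percentages are relative to the node's allocatable resources (2 CPUs, 8022300Ki memory) and appear to be truncated rather than rounded: 850m of 2000m CPU is 42.5%, shown as 42%, and 220Mi (225280Ki) of 8022300Ki memory is about 2.8%, shown as 2%.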
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 70s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node pause-618835 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node pause-618835 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s (x8 over 85s)  kubelet          Node pause-618835 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s                kubelet          Node pause-618835 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s                kubelet          Node pause-618835 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s                kubelet          Node pause-618835 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           73s                node-controller  Node pause-618835 event: Registered Node pause-618835 in Controller
	  Normal   NodeReady                31s                kubelet          Node pause-618835 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node pause-618835 event: Registered Node pause-618835 in Controller
	
	
	==> dmesg <==
	[Dec 2 22:09] overlayfs: idmapped layers are currently not supported
	[  +2.910244] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:11] overlayfs: idmapped layers are currently not supported
	[ +41.264115] hrtimer: interrupt took 8638023 ns
	[Dec 2 22:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:17] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:18] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:21] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:23] overlayfs: idmapped layers are currently not supported
	[ +16.312722] overlayfs: idmapped layers are currently not supported
	[  +9.098621] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:25] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:26] overlayfs: idmapped layers are currently not supported
	[ +25.910639] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:27] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.250662] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:28] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:30] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:32] overlayfs: idmapped layers are currently not supported
	[ +24.664804] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd] <==
	{"level":"warn","ts":"2025-12-02T22:43:28.173782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.187595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.217568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.243194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.255777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.272057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.323092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T22:44:23.350029Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T22:44:23.350119Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-618835","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-02T22:44:23.350219Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T22:44:23.487047Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.487221Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-02T22:44:23.487383Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T22:44:23.487444Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-02T22:44:23.486989Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487844Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487891Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T22:44:23.487903Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487954Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487995Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T22:44:23.488044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.490731Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-02T22:44:23.490878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.490957Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-02T22:44:23.491042Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-618835","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f] <==
	{"level":"warn","ts":"2025-12-02T22:44:34.064756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.094726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.109786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.125973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.151780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.179484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.228670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.248288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.268019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.321186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.351636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.376551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.411663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.455078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.487609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.508763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.535076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.581488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.627471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.669266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.678393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.708206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.724802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.738751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.841573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:44:49 up  4:26,  0 user,  load average: 1.34, 1.37, 1.60
	Linux pause-618835 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e] <==
	I1202 22:43:38.318056       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 22:43:38.318364       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 22:43:38.318508       1 main.go:148] setting mtu 1500 for CNI 
	I1202 22:43:38.318529       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 22:43:38.318540       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T22:43:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 22:43:38.521047       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 22:43:38.521080       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 22:43:38.521091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 22:43:38.521186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 22:44:08.522016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 22:44:08.522028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 22:44:08.522138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 22:44:08.611843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1202 22:44:10.221975       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 22:44:10.222004       1 metrics.go:72] Registering metrics
	I1202 22:44:10.222074       1 controller.go:711] "Syncing nftables rules"
	I1202 22:44:18.527263       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 22:44:18.527321       1 main.go:301] handling current node
	
	
	==> kindnet [ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a] <==
	I1202 22:44:31.523789       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 22:44:31.524156       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 22:44:31.524330       1 main.go:148] setting mtu 1500 for CNI 
	I1202 22:44:31.524371       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 22:44:31.524404       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T22:44:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 22:44:31.731897       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 22:44:31.731981       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 22:44:31.732014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 22:44:31.742304       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 22:44:36.051034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 22:44:36.051219       1 metrics.go:72] Registering metrics
	I1202 22:44:36.051318       1 controller.go:711] "Syncing nftables rules"
	I1202 22:44:41.734314       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 22:44:41.734372       1 main.go:301] handling current node
	
	
	==> kube-apiserver [17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f] <==
	W1202 22:44:23.366113       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366162       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366789       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366841       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368333       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368386       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368428       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368469       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368506       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368544       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368585       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368622       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368686       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368735       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368786       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369486       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369547       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369589       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369630       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369669       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369728       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370103       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370154       1 logging.go:55] [core] [Channel #4 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370193       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.371667       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5] <==
	I1202 22:44:35.975087       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 22:44:35.975877       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 22:44:35.998419       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 22:44:35.998560       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 22:44:35.998895       1 aggregator.go:171] initial CRD sync complete...
	I1202 22:44:35.998946       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 22:44:35.998975       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 22:44:35.999022       1 cache.go:39] Caches are synced for autoregister controller
	I1202 22:44:36.001422       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 22:44:36.043033       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 22:44:36.063122       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 22:44:36.070472       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 22:44:36.070748       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 22:44:36.071233       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 22:44:36.071292       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 22:44:36.079082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 22:44:36.087420       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 22:44:36.087449       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 22:44:36.096272       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 22:44:36.576865       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 22:44:36.943339       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 22:44:41.872121       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 22:44:41.874579       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 22:44:41.878917       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 22:44:41.905495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0] <==
	I1202 22:44:38.293474       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 22:44:38.295964       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 22:44:38.297539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 22:44:38.298281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 22:44:38.298568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 22:44:38.299743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 22:44:38.300872       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 22:44:38.303149       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 22:44:38.304347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 22:44:38.304360       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 22:44:38.306537       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 22:44:38.309794       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 22:44:38.312981       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 22:44:38.315307       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 22:44:38.318016       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 22:44:38.324327       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:44:38.324350       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 22:44:38.324360       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 22:44:38.326452       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:44:38.327996       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 22:44:38.328364       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 22:44:38.328501       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 22:44:38.328773       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 22:44:38.328976       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 22:44:38.340422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b] <==
	I1202 22:43:36.056032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 22:43:36.056079       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 22:43:36.060646       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-618835" podCIDRs=["10.244.0.0/24"]
	I1202 22:43:36.072906       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 22:43:36.082335       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 22:43:36.082501       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 22:43:36.082464       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 22:43:36.082659       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 22:43:36.082760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-618835"
	I1202 22:43:36.082823       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 22:43:36.083085       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 22:43:36.083669       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 22:43:36.084480       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:43:36.084536       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 22:43:36.084576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 22:43:36.084669       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 22:43:36.084966       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 22:43:36.085141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 22:43:36.085205       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 22:43:36.085722       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 22:43:36.082443       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 22:43:36.091035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 22:43:36.091240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:43:36.092905       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 22:44:21.089914       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65] <==
	I1202 22:43:38.931352       1 server_linux.go:53] "Using iptables proxy"
	I1202 22:43:39.008806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 22:43:39.109552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 22:43:39.109658       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 22:43:39.109747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 22:43:39.128452       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 22:43:39.128501       1 server_linux.go:132] "Using iptables Proxier"
	I1202 22:43:39.132715       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 22:43:39.133100       1 server.go:527] "Version info" version="v1.34.2"
	I1202 22:43:39.133135       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:43:39.137467       1 config.go:200] "Starting service config controller"
	I1202 22:43:39.137557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 22:43:39.137949       1 config.go:106] "Starting endpoint slice config controller"
	I1202 22:43:39.138000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 22:43:39.138101       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 22:43:39.138132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 22:43:39.141827       1 config.go:309] "Starting node config controller"
	I1202 22:43:39.141936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 22:43:39.141970       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 22:43:39.238110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 22:43:39.238232       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 22:43:39.238116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b] <==
	I1202 22:44:34.818972       1 server_linux.go:53] "Using iptables proxy"
	I1202 22:44:35.596218       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 22:44:36.097092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 22:44:36.099573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 22:44:36.099761       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 22:44:36.172152       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 22:44:36.172269       1 server_linux.go:132] "Using iptables Proxier"
	I1202 22:44:36.183315       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 22:44:36.183707       1 server.go:527] "Version info" version="v1.34.2"
	I1202 22:44:36.183895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:44:36.185566       1 config.go:200] "Starting service config controller"
	I1202 22:44:36.185625       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 22:44:36.185669       1 config.go:106] "Starting endpoint slice config controller"
	I1202 22:44:36.185698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 22:44:36.185736       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 22:44:36.185766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 22:44:36.186441       1 config.go:309] "Starting node config controller"
	I1202 22:44:36.189142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 22:44:36.189221       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 22:44:36.286412       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 22:44:36.286515       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 22:44:36.286539       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff] <==
	I1202 22:44:34.565228       1 serving.go:386] Generated self-signed cert in-memory
	I1202 22:44:36.505611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 22:44:36.505713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:44:36.510541       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 22:44:36.510644       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 22:44:36.510704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:36.510743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:36.510782       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.510812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.510982       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 22:44:36.511136       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 22:44:36.611299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.611441       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1202 22:44:36.611597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862] <==
	E1202 22:43:29.113108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 22:43:29.113207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 22:43:29.113263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 22:43:29.113316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 22:43:29.113367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 22:43:29.113450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 22:43:29.119152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 22:43:29.120024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 22:43:29.936231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 22:43:29.973584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 22:43:30.009469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 22:43:30.028438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 22:43:30.088882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 22:43:30.104466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 22:43:30.222790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 22:43:30.306177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 22:43:30.320079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 22:43:30.423731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 22:43:33.684011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:23.345800       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 22:44:23.345912       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 22:44:23.345923       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 22:44:23.345945       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:23.346146       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 22:44:23.346164       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.835505    1320 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-618835\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.895563    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-6zfrp\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="05e122c2-9293-4a8e-98a3-5e285bd382ac" pod="kube-system/kindnet-6zfrp"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.908262    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q74fb\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="7d073b63-2a81-4541-b874-7d4a252db1eb" pod="kube-system/coredns-66bc5c9577-q74fb"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.918021    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="1435c1a7dede36c2eca1cc73e0abe0d9" pod="kube-system/kube-scheduler-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.924098    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="db8b577fb05deaf5d02da92fa0f0f716" pod="kube-system/etcd-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.925371    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="9e8055dd05b97368ec3993047903f948" pod="kube-system/kube-apiserver-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.957743    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="57b09e366b2d85cc4b90395a127ac73e" pod="kube-system/kube-controller-manager-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.960085    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="57b09e366b2d85cc4b90395a127ac73e" pod="kube-system/kube-controller-manager-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.983225    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "kube-proxy-ntbkx" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="9329beee-5733-4a02-9057-d0a11df8846c" pod="kube-system/kube-proxy-ntbkx"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.987776    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "kindnet-6zfrp" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="05e122c2-9293-4a8e-98a3-5e285bd382ac" pod="kube-system/kindnet-6zfrp"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.993246    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "coredns-66bc5c9577-q74fb" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="7d073b63-2a81-4541-b874-7d4a252db1eb" pod="kube-system/coredns-66bc5c9577-q74fb"
	Dec 02 22:44:42 pause-618835 kubelet[1320]: W1202 22:44:42.125079    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 02 22:44:46 pause-618835 kubelet[1320]: I1202 22:44:46.352892    1320 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 22:44:46 pause-618835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 22:44:46 pause-618835 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 22:44:46 pause-618835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-618835 -n pause-618835
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-618835 -n pause-618835: exit status 2 (361.892823ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-618835 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-618835
helpers_test.go:243: (dbg) docker inspect pause-618835:

-- stdout --
	[
	    {
	        "Id": "92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71",
	        "Created": "2025-12-02T22:43:04.500240363Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 658444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:43:04.536118217Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/hostname",
	        "HostsPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/hosts",
	        "LogPath": "/var/lib/docker/containers/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71/92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71-json.log",
	        "Name": "/pause-618835",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-618835:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-618835",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92a62975dc25ead86900d25c4acc68672a13f5e6cf41389e2e5453f44182ce71",
	                "LowerDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705-init/diff:/var/lib/docker/overlay2/fcab993f2890bb3806c325812a711d78be32ffe300d8336bf47e24c24d614e6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ac6b89253759857ca9f31fbe80ae6c542625a30b4f5eef895fdb174eefa705/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-618835",
	                "Source": "/var/lib/docker/volumes/pause-618835/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-618835",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-618835",
	                "name.minikube.sigs.k8s.io": "pause-618835",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f12e75d3b702491bdddd35bc9a105023156ef7ca7196bfae2c6f8683b462e21",
	            "SandboxKey": "/var/run/docker/netns/8f12e75d3b70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-618835": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a6:84:2b:11:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed72467687ca4d747c2a4bec2bff64ff8e3ce49a32e16d096cf183a20cf3652f",
	                    "EndpointID": "cc85a707800f542edab5fc6d4c58469845cda4b90ec8cd4ee6c7c161b58dab84",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-618835",
	                        "92a62975dc25"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-618835 -n pause-618835
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-618835 -n pause-618835: exit status 2 (442.368291ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-618835 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-618835 logs -n 25: (1.822683446s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-245878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-825984    │ jenkins │ v1.35.0 │ 02 Dec 25 22:30 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ stop    │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p NoKubernetes-245878 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-245878 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │                     │
	│ delete  │ -p NoKubernetes-245878                                                                                                                          │ NoKubernetes-245878       │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:31 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p missing-upgrade-825984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:31 UTC │ 02 Dec 25 22:32 UTC │
	│ stop    │ -p kubernetes-upgrade-636006                                                                                                                    │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p kubernetes-upgrade-636006 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-636006 │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │                     │
	│ delete  │ -p missing-upgrade-825984                                                                                                                       │ missing-upgrade-825984    │ jenkins │ v1.37.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:32 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:32 UTC │ 02 Dec 25 22:33 UTC │
	│ stop    │ stopped-upgrade-013069 stop                                                                                                                     │ stopped-upgrade-013069    │ jenkins │ v1.35.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:33 UTC │
	│ start   │ -p stopped-upgrade-013069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:33 UTC │ 02 Dec 25 22:37 UTC │
	│ delete  │ -p stopped-upgrade-013069                                                                                                                       │ stopped-upgrade-013069    │ jenkins │ v1.37.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:37 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-873899    │ jenkins │ v1.35.0 │ 02 Dec 25 22:37 UTC │ 02 Dec 25 22:38 UTC │
	│ start   │ -p running-upgrade-873899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:42 UTC │
	│ delete  │ -p running-upgrade-873899                                                                                                                       │ running-upgrade-873899    │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ start   │ -p pause-618835 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:44 UTC │
	│ start   │ -p pause-618835 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │ 02 Dec 25 22:44 UTC │
	│ pause   │ -p pause-618835 --alsologtostderr -v=5                                                                                                          │ pause-618835              │ jenkins │ v1.37.0 │ 02 Dec 25 22:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:44:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
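	The four header lines above define the klog framing used by every entry that follows. A small Go sketch that splits one of the lines below into its fields, written directly against the documented format (the regexp and program are illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1202 22:44:21.772077  661046 out.go:360] Setting OutFile to fd 1 ..."
    	if m := klogLine.FindStringSubmatch(line); m != nil {
    		fmt.Printf("sev=%s mmdd=%s time=%s tid=%s at=%s:%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    	}
    }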
	I1202 22:44:21.772077  661046 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:44:21.772265  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772298  661046 out.go:374] Setting ErrFile to fd 2...
	I1202 22:44:21.772315  661046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:44:21.772599  661046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:44:21.772984  661046 out.go:368] Setting JSON to false
	I1202 22:44:21.774176  661046 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15990,"bootTime":1764699472,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 22:44:21.774280  661046 start.go:143] virtualization:  
	I1202 22:44:21.777389  661046 out.go:179] * [pause-618835] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:44:21.781217  661046 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:44:21.781370  661046 notify.go:221] Checking for updates...
	I1202 22:44:21.787658  661046 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:44:21.790449  661046 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:21.793367  661046 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 22:44:21.796242  661046 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:44:21.799091  661046 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:44:21.802432  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:21.803069  661046 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:44:21.853224  661046 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:44:21.853415  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.912680  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.903202911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.912785  661046 docker.go:319] overlay module found
	I1202 22:44:21.915810  661046 out.go:179] * Using the docker driver based on existing profile
	I1202 22:44:21.918579  661046 start.go:309] selected driver: docker
	I1202 22:44:21.918598  661046 start.go:927] validating driver "docker" against &{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.918734  661046 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:44:21.918838  661046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:44:21.986340  661046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:44:21.976842771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:44:21.986742  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:21.986812  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:21.986865  661046 start.go:353] cluster config:
	{Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:21.991739  661046 out.go:179] * Starting "pause-618835" primary control-plane node in "pause-618835" cluster
	I1202 22:44:21.994531  661046 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 22:44:21.997589  661046 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:44:22.000564  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:22.000717  661046 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 22:44:22.000743  661046 cache.go:65] Caching tarball of preloaded images
	I1202 22:44:22.000656  661046 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:44:22.001213  661046 preload.go:238] Found /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 22:44:22.001266  661046 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 22:44:22.001536  661046 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/config.json ...
	I1202 22:44:22.024507  661046 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:44:22.024534  661046 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:44:22.024549  661046 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:44:22.024584  661046 start.go:360] acquireMachinesLock for pause-618835: {Name:mke18653c2307ed5537ca2391ee1b331ce530ab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:44:22.024646  661046 start.go:364] duration metric: took 38.532µs to acquireMachinesLock for "pause-618835"
	I1202 22:44:22.024671  661046 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:44:22.024676  661046 fix.go:54] fixHost starting: 
	I1202 22:44:22.024950  661046 cli_runner.go:164] Run: docker container inspect pause-618835 --format={{.State.Status}}
	I1202 22:44:22.043037  661046 fix.go:112] recreateIfNeeded on pause-618835: state=Running err=<nil>
	W1202 22:44:22.043071  661046 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:44:22.046253  661046 out.go:252] * Updating the running docker "pause-618835" container ...
	I1202 22:44:22.046306  661046 machine.go:94] provisionDockerMachine start ...
	I1202 22:44:22.046465  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.064267  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.064602  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.064627  661046 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:44:22.214410  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
	I1202 22:44:22.214484  661046 ubuntu.go:182] provisioning hostname "pause-618835"
	I1202 22:44:22.214603  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.236613  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.236939  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.236955  661046 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-618835 && echo "pause-618835" | sudo tee /etc/hostname
	I1202 22:44:22.400346  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-618835
	
	I1202 22:44:22.400434  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:22.429430  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:22.429764  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:22.429797  661046 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-618835' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-618835/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-618835' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:44:22.579235  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:44:22.579261  661046 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-444114/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-444114/.minikube}
	I1202 22:44:22.579284  661046 ubuntu.go:190] setting up certificates
	I1202 22:44:22.579293  661046 provision.go:84] configureAuth start
	I1202 22:44:22.579352  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:22.596685  661046 provision.go:143] copyHostCerts
	I1202 22:44:22.596760  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem, removing ...
	I1202 22:44:22.596778  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem
	I1202 22:44:22.596853  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/ca.pem (1078 bytes)
	I1202 22:44:22.596973  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem, removing ...
	I1202 22:44:22.596983  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem
	I1202 22:44:22.597014  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/cert.pem (1123 bytes)
	I1202 22:44:22.597119  661046 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem, removing ...
	I1202 22:44:22.597130  661046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem
	I1202 22:44:22.597155  661046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-444114/.minikube/key.pem (1675 bytes)
	I1202 22:44:22.597213  661046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem org=jenkins.pause-618835 san=[127.0.0.1 192.168.85.2 localhost minikube pause-618835]
	I1202 22:44:22.983637  661046 provision.go:177] copyRemoteCerts
	I1202 22:44:22.983707  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:44:22.983761  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.001895  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:23.106664  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 22:44:23.123704  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 22:44:23.141408  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 22:44:23.158215  661046 provision.go:87] duration metric: took 578.901326ms to configureAuth
	I1202 22:44:23.158243  661046 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:44:23.158477  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:23.158589  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:23.176100  661046 main.go:143] libmachine: Using SSH client type: native
	I1202 22:44:23.176429  661046 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1202 22:44:23.176448  661046 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 22:44:28.569900  661046 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 22:44:28.569928  661046 machine.go:97] duration metric: took 6.523605112s to provisionDockerMachine
	I1202 22:44:28.569941  661046 start.go:293] postStartSetup for "pause-618835" (driver="docker")
	I1202 22:44:28.569952  661046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:44:28.570028  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:44:28.570073  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.587950  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.695041  661046 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:44:28.698567  661046 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:44:28.698595  661046 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:44:28.698607  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/addons for local assets ...
	I1202 22:44:28.698664  661046 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-444114/.minikube/files for local assets ...
	I1202 22:44:28.698757  661046 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem -> 4472112.pem in /etc/ssl/certs
	I1202 22:44:28.698862  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:44:28.706619  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:28.724907  661046 start.go:296] duration metric: took 154.950883ms for postStartSetup
	I1202 22:44:28.725007  661046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:44:28.725050  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.742944  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.844317  661046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:44:28.849218  661046 fix.go:56] duration metric: took 6.824535089s for fixHost
	I1202 22:44:28.849245  661046 start.go:83] releasing machines lock for "pause-618835", held for 6.824586601s
	I1202 22:44:28.849316  661046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-618835
	I1202 22:44:28.865850  661046 ssh_runner.go:195] Run: cat /version.json
	I1202 22:44:28.865915  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.866162  661046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:44:28.866214  661046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-618835
	I1202 22:44:28.884723  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.892719  661046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/pause-618835/id_rsa Username:docker}
	I1202 22:44:28.990565  661046 ssh_runner.go:195] Run: systemctl --version
	I1202 22:44:29.095942  661046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 22:44:29.136291  661046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:44:29.140549  661046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:44:29.140626  661046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:44:29.149352  661046 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:44:29.149375  661046 start.go:496] detecting cgroup driver to use...
	I1202 22:44:29.149405  661046 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:44:29.149458  661046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 22:44:29.164433  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 22:44:29.177340  661046 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:44:29.177454  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:44:29.193150  661046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:44:29.205859  661046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:44:29.344604  661046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:44:29.473058  661046 docker.go:234] disabling docker service ...
	I1202 22:44:29.473200  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:44:29.488559  661046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:44:29.502024  661046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:44:29.637356  661046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:44:29.800358  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:44:29.814789  661046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:44:29.829527  661046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 22:44:29.829606  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.838693  661046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 22:44:29.838809  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.848300  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.857902  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.867485  661046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:44:29.876198  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.886566  661046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.897113  661046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 22:44:29.906342  661046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:44:29.914235  661046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:44:29.921742  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.054347  661046 ssh_runner.go:195] Run: sudo systemctl restart crio
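	The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. Taken together, the commands leave the drop-in with roughly the following keys (the values are exactly what the commands write; the rest of the file is not shown in the log, so this fragment is a reconstruction, not a verbatim dump):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]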
	I1202 22:44:30.263291  661046 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 22:44:30.263379  661046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 22:44:30.267266  661046 start.go:564] Will wait 60s for crictl version
	I1202 22:44:30.267376  661046 ssh_runner.go:195] Run: which crictl
	I1202 22:44:30.270908  661046 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:44:30.295521  661046 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 22:44:30.295660  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.328562  661046 ssh_runner.go:195] Run: crio --version
	I1202 22:44:30.364952  661046 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 22:44:30.367831  661046 cli_runner.go:164] Run: docker network inspect pause-618835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:44:30.383864  661046 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:44:30.387835  661046 kubeadm.go:884] updating cluster {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:44:30.387986  661046 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 22:44:30.388044  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.427855  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.427881  661046 crio.go:433] Images already preloaded, skipping extraction
	I1202 22:44:30.427941  661046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:44:30.460051  661046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 22:44:30.460076  661046 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:44:30.460085  661046 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 22:44:30.460195  661046 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-618835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:44:30.460284  661046 ssh_runner.go:195] Run: crio config
	I1202 22:44:30.522967  661046 cni.go:84] Creating CNI manager for ""
	I1202 22:44:30.522993  661046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 22:44:30.523027  661046 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:44:30.523050  661046 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-618835 NodeName:pause-618835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:44:30.523182  661046 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-618835"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:44:30.523263  661046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 22:44:30.530844  661046 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:44:30.530942  661046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:44:30.538341  661046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 22:44:30.551648  661046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:44:30.564596  661046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 22:44:30.577069  661046 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:44:30.580727  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:30.703997  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:30.716780  661046 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835 for IP: 192.168.85.2
	I1202 22:44:30.716853  661046 certs.go:195] generating shared ca certs ...
	I1202 22:44:30.716880  661046 certs.go:227] acquiring lock for ca certs: {Name:mke35a9da4efb4139744fcabb1c5055e9e1c59f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:30.717060  661046 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key
	I1202 22:44:30.717130  661046 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key
	I1202 22:44:30.717172  661046 certs.go:257] generating profile certs ...
	I1202 22:44:30.717299  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key
	I1202 22:44:30.717406  661046 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key.1773daca
	I1202 22:44:30.717507  661046 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key
	I1202 22:44:30.717663  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem (1338 bytes)
	W1202 22:44:30.717726  661046 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211_empty.pem, impossibly tiny 0 bytes
	I1202 22:44:30.717766  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:44:30.717819  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/ca.pem (1078 bytes)
	I1202 22:44:30.717877  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:44:30.717924  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/certs/key.pem (1675 bytes)
	I1202 22:44:30.718011  661046 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem (1708 bytes)
	I1202 22:44:30.718867  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:44:30.740148  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:44:30.759607  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:44:30.777756  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 22:44:30.795266  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 22:44:30.812606  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:44:30.829845  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:44:30.847785  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 22:44:30.865490  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/certs/447211.pem --> /usr/share/ca-certificates/447211.pem (1338 bytes)
	I1202 22:44:30.882582  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/ssl/certs/4472112.pem --> /usr/share/ca-certificates/4472112.pem (1708 bytes)
	I1202 22:44:30.900054  661046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:44:30.917424  661046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:44:30.930080  661046 ssh_runner.go:195] Run: openssl version
	I1202 22:44:30.936710  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:44:30.944864  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949450  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:30.949515  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:44:31.010598  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:44:31.029090  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/447211.pem && ln -fs /usr/share/ca-certificates/447211.pem /etc/ssl/certs/447211.pem"
	I1202 22:44:31.049679  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054810  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:29 /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.054882  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/447211.pem
	I1202 22:44:31.147569  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/447211.pem /etc/ssl/certs/51391683.0"
	I1202 22:44:31.170321  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4472112.pem && ln -fs /usr/share/ca-certificates/4472112.pem /etc/ssl/certs/4472112.pem"
	I1202 22:44:31.257461  661046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.266924  661046 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:29 /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.267022  661046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4472112.pem
	I1202 22:44:31.370520  661046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4472112.pem /etc/ssl/certs/3ec20f2e.0"
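	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: a certificate in /etc/ssl/certs is located via a symlink named <subject-hash>.0, which is why minikubeCA.pem becomes b5213941.0. A Go sketch of the same step (linkBySubjectHash is a hypothetical helper; the paths are the ones from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash reproduces `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
    func linkBySubjectHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	return link, exec.Command("sudo", "ln", "-fs", pem, link).Run()
    }

    func main() {
    	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }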
	I1202 22:44:31.382909  661046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:44:31.391520  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:44:31.453313  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:44:31.516119  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:44:31.579296  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:44:31.643958  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:44:31.703980  661046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
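	(Each -checkend 86400 run asks whether the certificate stays valid for at least the next 86400 seconds, i.e. one day: openssl exits 0 if it does and non-zero if it would expire, which is how the restart path decides whether the existing certs can be reused. A minimal sketch of the same probe, using one cert path from the log:
	    # exit 0: still valid in 24h; non-zero: expiring, regenerate
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	        echo "certificate expires within a day" >&2
	    fi
	)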
	I1202 22:44:31.772148  661046 kubeadm.go:401] StartCluster: {Name:pause-618835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-618835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:44:31.772275  661046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 22:44:31.772341  661046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:44:31.832530  661046 cri.go:89] found id: "ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a"
	I1202 22:44:31.832557  661046 cri.go:89] found id: "ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1"
	I1202 22:44:31.832562  661046 cri.go:89] found id: "8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5"
	I1202 22:44:31.832565  661046 cri.go:89] found id: "8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f"
	I1202 22:44:31.832570  661046 cri.go:89] found id: "b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0"
	I1202 22:44:31.832574  661046 cri.go:89] found id: "6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff"
	I1202 22:44:31.832577  661046 cri.go:89] found id: "83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962"
	I1202 22:44:31.832580  661046 cri.go:89] found id: "8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65"
	I1202 22:44:31.832583  661046 cri.go:89] found id: "5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e"
	I1202 22:44:31.832590  661046 cri.go:89] found id: "17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f"
	I1202 22:44:31.832594  661046 cri.go:89] found id: "ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b"
	I1202 22:44:31.832597  661046 cri.go:89] found id: "12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd"
	I1202 22:44:31.832601  661046 cri.go:89] found id: "ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862"
	I1202 22:44:31.832604  661046 cri.go:89] found id: ""
	I1202 22:44:31.832653  661046 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 22:44:31.850816  661046 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:44:31Z" level=error msg="open /run/runc: no such file or directory"
	I1202 22:44:31.850899  661046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:44:31.860604  661046 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:44:31.860624  661046 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:44:31.860677  661046 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:44:31.876850  661046 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:44:31.877498  661046 kubeconfig.go:125] found "pause-618835" server: "https://192.168.85.2:8443"
	I1202 22:44:31.878317  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:31.878820  661046 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 22:44:31.878845  661046 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 22:44:31.878851  661046 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 22:44:31.878856  661046 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 22:44:31.878860  661046 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 22:44:31.879145  661046 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:44:31.899458  661046 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:44:31.899491  661046 kubeadm.go:602] duration metric: took 38.860784ms to restartPrimaryControlPlane
	I1202 22:44:31.899500  661046 kubeadm.go:403] duration metric: took 127.364099ms to StartCluster
	I1202 22:44:31.899517  661046 settings.go:142] acquiring lock: {Name:mk13bf6d22db0bc9643d971f7fee36732f4b60e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.899581  661046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 22:44:31.900441  661046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-444114/kubeconfig: {Name:mk09d6cdb1f6dfbf3cbb4e269030390d9fcef42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:44:31.900682  661046 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 22:44:31.901008  661046 config.go:182] Loaded profile config "pause-618835": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:44:31.901066  661046 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:44:31.904906  661046 out.go:179] * Enabled addons: 
	I1202 22:44:31.904973  661046 out.go:179] * Verifying Kubernetes components...
	I1202 22:44:31.907865  661046 addons.go:530] duration metric: took 6.798498ms for enable addons: enabled=[]
	I1202 22:44:31.907952  661046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:44:32.213405  661046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:44:32.235526  661046 node_ready.go:35] waiting up to 6m0s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909622  661046 node_ready.go:49] node "pause-618835" is "Ready"
	I1202 22:44:35.909697  661046 node_ready.go:38] duration metric: took 3.674124436s for node "pause-618835" to be "Ready" ...
	I1202 22:44:35.909726  661046 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:44:35.909814  661046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:44:35.929353  661046 api_server.go:72] duration metric: took 4.028633544s to wait for apiserver process to appear ...
	I1202 22:44:35.929419  661046 api_server.go:88] waiting for apiserver healthz status ...
	I1202 22:44:35.929461  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:35.952358  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 22:44:35.952443  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 22:44:36.430070  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.442739  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 22:44:36.442817  661046 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
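	(The first probe is rejected with 403 because it is unauthenticated (system:anonymous) and the RBAC bootstrap roles that allow anonymous access to /healthz are not yet in place; the 500 that follows is the aggregated check listing, where the [-] entries — rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes — are the post-start hooks still pending. The same probe can be reproduced by hand; -k skips server-certificate verification, matching the anonymous client used here:
	    # unauthenticated health probe; ?verbose asks for the per-check listing
	    curl -k "https://192.168.85.2:8443/healthz?verbose"
	)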
	I1202 22:44:36.930397  661046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:44:36.938429  661046 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 22:44:36.940086  661046 api_server.go:141] control plane version: v1.34.2
	I1202 22:44:36.940124  661046 api_server.go:131] duration metric: took 1.010684624s to wait for apiserver health ...
	I1202 22:44:36.940133  661046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 22:44:36.953714  661046 system_pods.go:59] 7 kube-system pods found
	I1202 22:44:36.953756  661046 system_pods.go:61] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.953767  661046 system_pods.go:61] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.953772  661046 system_pods.go:61] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.953780  661046 system_pods.go:61] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.953785  661046 system_pods.go:61] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.953789  661046 system_pods.go:61] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.953801  661046 system_pods.go:61] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.953813  661046 system_pods.go:74] duration metric: took 13.674774ms to wait for pod list to return data ...
	I1202 22:44:36.953824  661046 default_sa.go:34] waiting for default service account to be created ...
	I1202 22:44:36.979958  661046 default_sa.go:45] found service account: "default"
	I1202 22:44:36.979988  661046 default_sa.go:55] duration metric: took 26.153398ms for default service account to be created ...
	I1202 22:44:36.980007  661046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 22:44:36.983586  661046 system_pods.go:86] 7 kube-system pods found
	I1202 22:44:36.983626  661046 system_pods.go:89] "coredns-66bc5c9577-q74fb" [7d073b63-2a81-4541-b874-7d4a252db1eb] Running
	I1202 22:44:36.983636  661046 system_pods.go:89] "etcd-pause-618835" [5f7497f0-8f59-4dbf-bee6-7c7f5cf4e0c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:44:36.983643  661046 system_pods.go:89] "kindnet-6zfrp" [05e122c2-9293-4a8e-98a3-5e285bd382ac] Running
	I1202 22:44:36.983651  661046 system_pods.go:89] "kube-apiserver-pause-618835" [1dd916ed-fdd3-426e-a947-9388a4f75333] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:44:36.983656  661046 system_pods.go:89] "kube-controller-manager-pause-618835" [4f039b5f-efb1-41ee-8e37-5164dc4b9dda] Running
	I1202 22:44:36.983660  661046 system_pods.go:89] "kube-proxy-ntbkx" [9329beee-5733-4a02-9057-d0a11df8846c] Running
	I1202 22:44:36.983668  661046 system_pods.go:89] "kube-scheduler-pause-618835" [2c774952-fe5f-471d-bd1f-4464a17be190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:44:36.983676  661046 system_pods.go:126] duration metric: took 3.662768ms to wait for k8s-apps to be running ...
	I1202 22:44:36.983695  661046 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 22:44:36.983753  661046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:44:36.997015  661046 system_svc.go:56] duration metric: took 13.311979ms WaitForService to wait for kubelet
	I1202 22:44:36.997060  661046 kubeadm.go:587] duration metric: took 5.096345819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:44:36.997082  661046 node_conditions.go:102] verifying NodePressure condition ...
	I1202 22:44:37.004213  661046 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 22:44:37.004260  661046 node_conditions.go:123] node cpu capacity is 2
	I1202 22:44:37.004276  661046 node_conditions.go:105] duration metric: took 7.186342ms to run NodePressure ...
	I1202 22:44:37.004292  661046 start.go:242] waiting for startup goroutines ...
	I1202 22:44:37.004307  661046 start.go:247] waiting for cluster config update ...
	I1202 22:44:37.004317  661046 start.go:256] writing updated cluster config ...
	I1202 22:44:37.004731  661046 ssh_runner.go:195] Run: rm -f paused
	I1202 22:44:37.010102  661046 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:44:37.010946  661046 kapi.go:59] client config for pause-618835: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/profiles/pause-618835/client.key", CAFile:"/home/jenkins/minikube-integration/21997-444114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:44:37.016129  661046 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.024271  661046 pod_ready.go:94] pod "coredns-66bc5c9577-q74fb" is "Ready"
	I1202 22:44:37.024312  661046 pod_ready.go:86] duration metric: took 8.141921ms for pod "coredns-66bc5c9577-q74fb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:37.027905  661046 pod_ready.go:83] waiting for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:39.034075  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	W1202 22:44:41.533868  661046 pod_ready.go:104] pod "etcd-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:43.034100  661046 pod_ready.go:94] pod "etcd-pause-618835" is "Ready"
	I1202 22:44:43.034126  661046 pod_ready.go:86] duration metric: took 6.006192029s for pod "etcd-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:43.036482  661046 pod_ready.go:83] waiting for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:44:45.049393  661046 pod_ready.go:104] pod "kube-apiserver-pause-618835" is not "Ready", error: <nil>
	I1202 22:44:45.541910  661046 pod_ready.go:94] pod "kube-apiserver-pause-618835" is "Ready"
	I1202 22:44:45.541937  661046 pod_ready.go:86] duration metric: took 2.505424103s for pod "kube-apiserver-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.544357  661046 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.548883  661046 pod_ready.go:94] pod "kube-controller-manager-pause-618835" is "Ready"
	I1202 22:44:45.548909  661046 pod_ready.go:86] duration metric: took 4.53038ms for pod "kube-controller-manager-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.551398  661046 pod_ready.go:83] waiting for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.558313  661046 pod_ready.go:94] pod "kube-proxy-ntbkx" is "Ready"
	I1202 22:44:45.558369  661046 pod_ready.go:86] duration metric: took 6.871073ms for pod "kube-proxy-ntbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.563355  661046 pod_ready.go:83] waiting for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832191  661046 pod_ready.go:94] pod "kube-scheduler-pause-618835" is "Ready"
	I1202 22:44:45.832219  661046 pod_ready.go:86] duration metric: took 268.838109ms for pod "kube-scheduler-pause-618835" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:44:45.832233  661046 pod_ready.go:40] duration metric: took 8.822083917s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
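	(The per-pod waits above poll each control-plane pod by its well-known label — k8s-app=kube-dns, component=etcd, and so on — until its Ready condition is true. A rough standalone equivalent with kubectl, assuming the pause-618835 kubeconfig context this run just wrote:
	    # wait for the same labelled kube-system pods to become Ready
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	        kubectl --context pause-618835 -n kube-system wait pod -l "$sel" \
	            --for=condition=Ready --timeout=4m
	    done
	)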
	I1202 22:44:45.883405  661046 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 22:44:45.886643  661046 out.go:179] * Done! kubectl is now configured to use "pause-618835" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.275873525Z" level=info msg="Starting container: 8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5" id=5b772e18-5560-4a51-bbbf-69b1dd498e8b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.300999367Z" level=info msg="Starting container: ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1" id=f8a578a2-3d7b-42db-96f9-7564eb81cc74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.302974248Z" level=info msg="Started container" PID=2328 containerID=ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1 description=kube-system/coredns-66bc5c9577-q74fb/coredns id=f8a578a2-3d7b-42db-96f9-7564eb81cc74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb4dba1d0c8e7d09ea24fc705f8fdb059d82b449e314d924bbfa968a82dfe891
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.322235781Z" level=info msg="Started container" PID=2318 containerID=8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5 description=kube-system/kube-apiserver-pause-618835/kube-apiserver id=5b772e18-5560-4a51-bbbf-69b1dd498e8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=3263d4c876737fb7e44e5c5b3d3673461f7e40ab79b40370c4320c95c5ee9404
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.350455836Z" level=info msg="Created container ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a: kube-system/kindnet-6zfrp/kindnet-cni" id=a369a2f1-f8d4-43cc-a7e1-e1cc845dba10 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.351196882Z" level=info msg="Starting container: ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a" id=34a2b4ca-5240-49da-902c-1f2e7984d5bf name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:31 pause-618835 crio[2085]: time="2025-12-02T22:44:31.353249581Z" level=info msg="Started container" PID=2353 containerID=ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a description=kube-system/kindnet-6zfrp/kindnet-cni id=34a2b4ca-5240-49da-902c-1f2e7984d5bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=cafe528068ddfaf29b6141c08fc1ef2f78405e5a268527bae0efbb7df8b15a6d
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.098779976Z" level=info msg="Created container 9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b: kube-system/kube-proxy-ntbkx/kube-proxy" id=3199437c-c0c5-47fc-b312-fe8a15e6f53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.106110139Z" level=info msg="Starting container: 9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b" id=bfeadfe4-ad1c-4db8-8c1f-00066a4cc4f1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 22:44:32 pause-618835 crio[2085]: time="2025-12-02T22:44:32.109571485Z" level=info msg="Started container" PID=2341 containerID=9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b description=kube-system/kube-proxy-ntbkx/kube-proxy id=bfeadfe4-ad1c-4db8-8c1f-00066a4cc4f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6600c7284d6bf779717d7b0feabf264604b09ba26e0e23220de7119f289f018
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.734711999Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738544762Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738577337Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.738602387Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.74198489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.742021419Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.742043746Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746364892Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746406262Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.746430706Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749739706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749774832Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.749797249Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.753060841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 22:44:41 pause-618835 crio[2085]: time="2025-12-02T22:44:41.753102679Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ebc7a7f77724b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   cafe528068ddf       kindnet-6zfrp                          kube-system
	9b2e7721431c1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   20 seconds ago       Running             kube-proxy                1                   d6600c7284d6b       kube-proxy-ntbkx                       kube-system
	ebad6ba86b6d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   fb4dba1d0c8e7       coredns-66bc5c9577-q74fb               kube-system
	8ee6d1594ad32       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   20 seconds ago       Running             kube-apiserver            1                   3263d4c876737       kube-apiserver-pause-618835            kube-system
	8d1b9e360db2b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   20 seconds ago       Running             etcd                      1                   64ca0e96510b7       etcd-pause-618835                      kube-system
	b0691e31e9e7b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   20 seconds ago       Running             kube-controller-manager   1                   f74512c65b80a       kube-controller-manager-pause-618835   kube-system
	6d242b830fba6       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   20 seconds ago       Running             kube-scheduler            1                   8a1500d4db2ca       kube-scheduler-pause-618835            kube-system
	83cb63477208e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   fb4dba1d0c8e7       coredns-66bc5c9577-q74fb               kube-system
	8f5ab902bbd3e       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   d6600c7284d6b       kube-proxy-ntbkx                       kube-system
	5b54b32ca6e4a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   cafe528068ddf       kindnet-6zfrp                          kube-system
	17c10f7e06826       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   3263d4c876737       kube-apiserver-pause-618835            kube-system
	ec88d90be5db8       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f74512c65b80a       kube-controller-manager-pause-618835   kube-system
	12e8d079d7940       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   64ca0e96510b7       etcd-pause-618835                      kube-system
	ed638ee4ec741       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   8a1500d4db2ca       kube-scheduler-pause-618835            kube-system
	
	
	==> coredns [83cb63477208e0ae5cdd3f4c3cc7c8b2ea8cead2e12ec9d91e89e68bf4d05962] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54243 - 61826 "HINFO IN 1715722739181043788.1090242272406961120. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052802843s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebad6ba86b6d89cef5315d8904daf8980c4a927bfa69054f09b08eec7813c2f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32886 - 51048 "HINFO IN 5360015328477056735.6666557341340843959. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051361801s
	
	
	==> describe nodes <==
	Name:               pause-618835
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-618835
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=pause-618835
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T22_43_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 22:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-618835
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 22:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:43:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 22:44:18 +0000   Tue, 02 Dec 2025 22:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-618835
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                3e738705-4c5e-466f-a52e-ac9561bbcbff
	  Boot ID:                    c77b83b8-287c-4d91-bf3a-e2991f41400e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q74fb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-618835                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-6zfrp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-618835             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-618835    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-ntbkx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-618835             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-618835 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-618835 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-618835 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-618835 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-618835 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-618835 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                node-controller  Node pause-618835 event: Registered Node pause-618835 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-618835 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-618835 event: Registered Node pause-618835 in Controller
	
	
	==> dmesg <==
	[Dec 2 22:09] overlayfs: idmapped layers are currently not supported
	[  +2.910244] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:11] overlayfs: idmapped layers are currently not supported
	[ +41.264115] hrtimer: interrupt took 8638023 ns
	[Dec 2 22:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:17] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:18] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:21] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:23] overlayfs: idmapped layers are currently not supported
	[ +16.312722] overlayfs: idmapped layers are currently not supported
	[  +9.098621] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:25] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:26] overlayfs: idmapped layers are currently not supported
	[ +25.910639] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:27] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.250662] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:28] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:30] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:32] overlayfs: idmapped layers are currently not supported
	[ +24.664804] overlayfs: idmapped layers are currently not supported
	[Dec 2 22:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [12e8d079d794087289e5f8d232e21591b54584367b075f1cc0afbff6ac32c8fd] <==
	{"level":"warn","ts":"2025-12-02T22:43:28.173782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.187595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.217568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.243194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.255777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.272057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:43:28.323092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T22:44:23.350029Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T22:44:23.350119Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-618835","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-02T22:44:23.350219Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T22:44:23.487047Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.487221Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-02T22:44:23.487383Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T22:44:23.487444Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-02T22:44:23.486989Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487844Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487891Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T22:44:23.487903Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487954Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T22:44:23.487995Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T22:44:23.488044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.490731Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-02T22:44:23.490878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T22:44:23.490957Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-02T22:44:23.491042Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-618835","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [8d1b9e360db2bd4c4dcc348befeacfa86c515a07431013f5b3456ae9b1731b9f] <==
	{"level":"warn","ts":"2025-12-02T22:44:34.064756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.094726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.109786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.125973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.151780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.179484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.228670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.248288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.268019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.321186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.351636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.376551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.411663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.455078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.487609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.508763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.535076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.581488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.627471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.669266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.678393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.708206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.724802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.738751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T22:44:34.841573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:44:52 up  4:27,  0 user,  load average: 1.55, 1.42, 1.61
	Linux pause-618835 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b54b32ca6e4a2ce3f09c462a668e455f5b9fa77d78d867e607a88236ae9950e] <==
	I1202 22:43:38.318056       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 22:43:38.318364       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 22:43:38.318508       1 main.go:148] setting mtu 1500 for CNI 
	I1202 22:43:38.318529       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 22:43:38.318540       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T22:43:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 22:43:38.521047       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 22:43:38.521080       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 22:43:38.521091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 22:43:38.521186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 22:44:08.522016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 22:44:08.522028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 22:44:08.522138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 22:44:08.611843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1202 22:44:10.221975       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 22:44:10.222004       1 metrics.go:72] Registering metrics
	I1202 22:44:10.222074       1 controller.go:711] "Syncing nftables rules"
	I1202 22:44:18.527263       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 22:44:18.527321       1 main.go:301] handling current node
	
	
	==> kindnet [ebc7a7f77724bba35a7187bca2bd68c10c7f5ee86954c09f417b25359e236a4a] <==
	I1202 22:44:31.523789       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 22:44:31.524156       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 22:44:31.524330       1 main.go:148] setting mtu 1500 for CNI 
	I1202 22:44:31.524371       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 22:44:31.524404       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T22:44:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 22:44:31.731897       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 22:44:31.731981       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 22:44:31.732014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 22:44:31.742304       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 22:44:36.051034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 22:44:36.051219       1 metrics.go:72] Registering metrics
	I1202 22:44:36.051318       1 controller.go:711] "Syncing nftables rules"
	I1202 22:44:41.734314       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 22:44:41.734372       1 main.go:301] handling current node
	I1202 22:44:51.734249       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 22:44:51.734301       1 main.go:301] handling current node
	
	
	==> kube-apiserver [17c10f7e06826a1faf7afc025d8f41abadbca76a0c39a27a0c5daf08f96af96f] <==
	W1202 22:44:23.366113       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366162       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366789       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.366841       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368333       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368386       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368428       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368469       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368506       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368544       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368585       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368622       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368686       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368735       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.368786       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369486       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369547       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369589       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369630       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369669       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.369728       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370103       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370154       1 logging.go:55] [core] [Channel #4 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.370193       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 22:44:23.371667       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8ee6d1594ad321036942b361e7dd8a53fcaadf60d11e12bd036adb3cfbb34cd5] <==
	I1202 22:44:35.975087       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 22:44:35.975877       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 22:44:35.998419       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 22:44:35.998560       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 22:44:35.998895       1 aggregator.go:171] initial CRD sync complete...
	I1202 22:44:35.998946       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 22:44:35.998975       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 22:44:35.999022       1 cache.go:39] Caches are synced for autoregister controller
	I1202 22:44:36.001422       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 22:44:36.043033       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 22:44:36.063122       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 22:44:36.070472       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 22:44:36.070748       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 22:44:36.071233       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 22:44:36.071292       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 22:44:36.079082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 22:44:36.087420       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 22:44:36.087449       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 22:44:36.096272       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 22:44:36.576865       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 22:44:36.943339       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 22:44:41.872121       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 22:44:41.874579       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 22:44:41.878917       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 22:44:41.905495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b0691e31e9e7baf1834190aa3c7deea3b0534664b4a97abef4dfaa3ddc6594d0] <==
	I1202 22:44:38.293474       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 22:44:38.295964       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 22:44:38.297539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 22:44:38.298281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 22:44:38.298568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 22:44:38.299743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 22:44:38.300872       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 22:44:38.303149       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 22:44:38.304347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 22:44:38.304360       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 22:44:38.306537       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 22:44:38.309794       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 22:44:38.312981       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 22:44:38.315307       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 22:44:38.318016       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 22:44:38.324327       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:44:38.324350       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 22:44:38.324360       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 22:44:38.326452       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:44:38.327996       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 22:44:38.328364       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 22:44:38.328501       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 22:44:38.328773       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 22:44:38.328976       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 22:44:38.340422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [ec88d90be5db8f835ebfd94aaa7572b57f28bb99d1a29ce4aeeed1ceff58393b] <==
	I1202 22:43:36.056032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 22:43:36.056079       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 22:43:36.060646       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-618835" podCIDRs=["10.244.0.0/24"]
	I1202 22:43:36.072906       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 22:43:36.082335       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 22:43:36.082501       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 22:43:36.082464       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 22:43:36.082659       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 22:43:36.082760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-618835"
	I1202 22:43:36.082823       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 22:43:36.083085       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 22:43:36.083669       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 22:43:36.084480       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:43:36.084536       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 22:43:36.084576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 22:43:36.084669       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 22:43:36.084966       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 22:43:36.085141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 22:43:36.085205       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 22:43:36.085722       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 22:43:36.082443       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 22:43:36.091035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 22:43:36.091240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 22:43:36.092905       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 22:44:21.089914       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f5ab902bbd3e8349752d60c5b39dd8d3ac865d4224fbb4ed68370a5c21c0e65] <==
	I1202 22:43:38.931352       1 server_linux.go:53] "Using iptables proxy"
	I1202 22:43:39.008806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 22:43:39.109552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 22:43:39.109658       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 22:43:39.109747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 22:43:39.128452       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 22:43:39.128501       1 server_linux.go:132] "Using iptables Proxier"
	I1202 22:43:39.132715       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 22:43:39.133100       1 server.go:527] "Version info" version="v1.34.2"
	I1202 22:43:39.133135       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:43:39.137467       1 config.go:200] "Starting service config controller"
	I1202 22:43:39.137557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 22:43:39.137949       1 config.go:106] "Starting endpoint slice config controller"
	I1202 22:43:39.138000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 22:43:39.138101       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 22:43:39.138132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 22:43:39.141827       1 config.go:309] "Starting node config controller"
	I1202 22:43:39.141936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 22:43:39.141970       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 22:43:39.238110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 22:43:39.238232       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 22:43:39.238116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [9b2e7721431c1c3b5b53d0d676eb5f7747dedd3333f38be16fea087f02e5460b] <==
	I1202 22:44:34.818972       1 server_linux.go:53] "Using iptables proxy"
	I1202 22:44:35.596218       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 22:44:36.097092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 22:44:36.099573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 22:44:36.099761       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 22:44:36.172152       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 22:44:36.172269       1 server_linux.go:132] "Using iptables Proxier"
	I1202 22:44:36.183315       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 22:44:36.183707       1 server.go:527] "Version info" version="v1.34.2"
	I1202 22:44:36.183895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:44:36.185566       1 config.go:200] "Starting service config controller"
	I1202 22:44:36.185625       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 22:44:36.185669       1 config.go:106] "Starting endpoint slice config controller"
	I1202 22:44:36.185698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 22:44:36.185736       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 22:44:36.185766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 22:44:36.186441       1 config.go:309] "Starting node config controller"
	I1202 22:44:36.189142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 22:44:36.189221       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 22:44:36.286412       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 22:44:36.286515       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 22:44:36.286539       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6d242b830fba6c80e519eb3b37352a49e5280c10c2c87c52cab7798d9d0454ff] <==
	I1202 22:44:34.565228       1 serving.go:386] Generated self-signed cert in-memory
	I1202 22:44:36.505611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 22:44:36.505713       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 22:44:36.510541       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 22:44:36.510644       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 22:44:36.510704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:36.510743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:36.510782       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.510812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.510982       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 22:44:36.511136       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 22:44:36.611299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 22:44:36.611441       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1202 22:44:36.611597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ed638ee4ec741a39c1b833441fddf86673f676c063b28bf04d941540fd151862] <==
	E1202 22:43:29.113108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 22:43:29.113207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 22:43:29.113263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 22:43:29.113316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 22:43:29.113367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 22:43:29.113450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 22:43:29.119152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 22:43:29.120024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 22:43:29.936231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 22:43:29.973584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 22:43:30.009469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 22:43:30.028438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 22:43:30.088882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 22:43:30.104466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 22:43:30.222790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 22:43:30.306177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 22:43:30.320079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 22:43:30.423731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 22:43:33.684011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:23.345800       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 22:44:23.345912       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 22:44:23.345923       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 22:44:23.345945       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 22:44:23.346146       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 22:44:23.346164       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.835505    1320 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-618835\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.895563    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-6zfrp\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="05e122c2-9293-4a8e-98a3-5e285bd382ac" pod="kube-system/kindnet-6zfrp"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.908262    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q74fb\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="7d073b63-2a81-4541-b874-7d4a252db1eb" pod="kube-system/coredns-66bc5c9577-q74fb"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.918021    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="1435c1a7dede36c2eca1cc73e0abe0d9" pod="kube-system/kube-scheduler-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.924098    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="db8b577fb05deaf5d02da92fa0f0f716" pod="kube-system/etcd-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.925371    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="9e8055dd05b97368ec3993047903f948" pod="kube-system/kube-apiserver-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.957743    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="57b09e366b2d85cc4b90395a127ac73e" pod="kube-system/kube-controller-manager-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.960085    1320 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-618835\" is forbidden: User \"system:node:pause-618835\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-618835' and this object" podUID="57b09e366b2d85cc4b90395a127ac73e" pod="kube-system/kube-controller-manager-pause-618835"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.983225    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "kube-proxy-ntbkx" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="9329beee-5733-4a02-9057-d0a11df8846c" pod="kube-system/kube-proxy-ntbkx"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.987776    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "kindnet-6zfrp" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="05e122c2-9293-4a8e-98a3-5e285bd382ac" pod="kube-system/kindnet-6zfrp"
	Dec 02 22:44:35 pause-618835 kubelet[1320]: E1202 22:44:35.993246    1320 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         pods "coredns-66bc5c9577-q74fb" is forbidden: User "system:node:pause-618835" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-618835' and this object
	Dec 02 22:44:35 pause-618835 kubelet[1320]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 02 22:44:35 pause-618835 kubelet[1320]:  > podUID="7d073b63-2a81-4541-b874-7d4a252db1eb" pod="kube-system/coredns-66bc5c9577-q74fb"
	Dec 02 22:44:42 pause-618835 kubelet[1320]: W1202 22:44:42.125079    1320 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 02 22:44:46 pause-618835 kubelet[1320]: I1202 22:44:46.352892    1320 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 22:44:46 pause-618835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 22:44:46 pause-618835 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 22:44:46 pause-618835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-618835 -n pause-618835
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-618835 -n pause-618835: exit status 2 (448.526535ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-618835 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.58s)

TestStartStop/group/newest-cni/serial/SecondStart (7200.072s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-921972 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1202 23:03:42.593746  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 23:05:44.435544  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/default-k8s-diff-port-341144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 23:06:18.469445  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 23:06:39.333538  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 23:07:37.549608  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/old-k8s-version-553454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 23:08:02.410124  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (23m12s)
		TestStartStop (25m11s)
		TestStartStop/group/newest-cni (15m13s)
		TestStartStop/group/newest-cni/serial (15m13s)
		TestStartStop/group/newest-cni/serial/SecondStart (4m48s)
		TestStartStop/group/no-preload (15m42s)
		TestStartStop/group/no-preload/serial (15m42s)
		TestStartStop/group/no-preload/serial/SecondStart (5m33s)

goroutine 5472 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000003a40, 0x40006cfbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x4000728060, {0x534c580, 0x2c, 0x2c}, {0x40006cfd08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400069c0a0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400069c0a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 4994 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x4000100a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4990
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4700 [chan receive, 15 minutes]:
testing.(*T).Run(0x4000cd3a40, {0x296e9ac?, 0x0?}, 0x40001ad800)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4000cd3a40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4000cd3a40, 0x4001b10580)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4696
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4442 [chan receive, 23 minutes]:
testing.(*T).Run(0x4000cd28c0, {0x296d53a?, 0xd901a7dd4ca?}, 0x40015ca1e0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4000cd28c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4000cd28c0, 0x339b500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5114 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5113
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 167 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018d6ba0, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 179
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4698 [chan receive, 15 minutes]:
testing.(*T).Run(0x4000cd36c0, {0x296e9ac?, 0x0?}, 0x4001a1a180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4000cd36c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4000cd36c0, 0x4001b10500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4696
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5489 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0xffff66ad8600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40009208a0?, 0x4000c90362?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40009208a0, {0x4000c90362, 0x49e, 0x49e})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40002f2570, {0x4000c90362?, 0x4001c36568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40016d2870, {0x369ba58, 0x40001102b0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x40016d2870}, {0x369ba58, 0x40001102b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40002f2570?, {0x369bc40, 0x40016d2870})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40002f2570, {0x369bc40, 0x40016d2870})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x40016d2870}, {0x369bad8, 0x40002f2570}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40014ea8c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5488
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4995 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000920cc0, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4990
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 171 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x4000104380}, 0x4001419f40, 0x400095ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x4000104380}, 0x68?, 0x4001419f40, 0x4001419f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x4000104380?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40000d4600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5000 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4999
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 166 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x40018708c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 179
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 170 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40019ba9d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019ba9c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018d6ba0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000104a80?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x4000104380?}, 0x40000a0ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x4000104380}, 0x400091df38, {0x369d680, 0x4000713e30}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d680?, 0x4000713e30?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400038aa60, 0x3b9aca00, 0x0, 0x1, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 172 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 171
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5490 [IO wait]:
internal/poll.runtime_pollWait(0xffff66a05c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000920960?, 0x4001d3728b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000920960, {0x4001d3728b, 0x42d75, 0x42d75})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40002f25c8, {0x4001d3728b?, 0x4001c31d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40016d28a0, {0x369ba58, 0x40001102c8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x40016d28a0}, {0x369ba58, 0x40001102c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40002f25c8?, {0x369bc40, 0x40016d28a0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40002f25c8, {0x369bc40, 0x40016d28a0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x40016d28a0}, {0x369bad8, 0x40002f25c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40000d4900?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5488
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4741 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000100a80, 0x40015ca1e0)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4442
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2777 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016cbbc0, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 991 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 990
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2776 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x400093c700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4820 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015f9dc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015f9dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015f9dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015f9dc0, 0x4001a1a680)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4999 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x4000104380}, 0x400009ef40, 0x400091ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x4000104380}, 0xa0?, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x4000104380?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40000d4900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4995
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 989 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40006832d0, 0x2a)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006832c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015abd40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000276a10?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x4000104380?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x4000104380}, 0x40013def38, {0x369d680, 0x4001a79680}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x4001a79680?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000cf03d0, 0x3b9aca00, 0x0, 0x1, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 970
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2365 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x4001488d80, 0x4001b8d340)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2364
	/usr/local/go/src/os/exec/exec.go:775 +0x678

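Stacks like goroutine 2365 show os/exec's watchCtx helper blocked on a channel
send for over an hour: watchCtx is spawned by exec.CommandContext and can only
deliver its result once Wait runs. A minimal sketch of the Start/Wait pairing
that lets it exit (the command here is hypothetical):

package main

import (
	"context"
	"os/exec"
)

func runWithContext(ctx context.Context) error {
	// CommandContext starts a watchCtx goroutine alongside the process.
	cmd := exec.CommandContext(ctx, "sleep", "1")
	if err := cmd.Start(); err != nil {
		return err
	}
	// Wait receives watchCtx's result; skipping it leaves that goroutine
	// parked in "chan send", as seen above.
	return cmd.Wait()
}
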
goroutine 990 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x4000104380}, 0x400009ef40, 0x40013ccf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x4000104380}, 0x50?, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x4000104380?}, 0x4001479080?, 0x4000000a00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400164b680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 970
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 969 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x400164bb00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 968
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5144 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000224080?}, 0x400070b080?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5140
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 970 [chan receive, 110 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015abd40, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 968
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 2506 [IO wait, 98 minutes]:
internal/poll.runtime_pollWait(0xffff66e3ae00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40001ad880?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40001ad880)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40001ad880)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001b10e80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001b10e80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000153300, {0x36d3120, 0x4001b10e80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000153300)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2504
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

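Goroutine 2506 is the suite's HTTP proxy, idle in Accept for 98 minutes. A
rough sketch of the shape these frames imply, assuming only net/http (the
actual startHTTPProxy lives in functional_test.go and may differ):

package main

import (
	"log"
	"net/http"
)

func startHTTPProxy(addr string, handler http.Handler) *http.Server {
	srv := &http.Server{Addr: addr, Handler: handler}
	go func() {
		// Blocks in Accept until the listener closes, matching the
		// long-lived "IO wait" state in the dump.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Printf("proxy exited: %v", err)
		}
	}()
	return srv
}
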
goroutine 4880 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x400162ddc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x400162ddc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400162ddc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x400162ddc0, 0x4001a1ae80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5388 [chan receive, 4 minutes]:
testing.(*T).Run(0x40014ea1c0, {0x297a643?, 0x40000006ee?}, 0x4001a1a280)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40014ea1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40014ea1c0, 0x4001a1a180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4698
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 674 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff66ad9400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40001ada80?, 0x2?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40001ada80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40001ada80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40019ba280)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40019ba280)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40002d2800, {0x36d3120, 0x40019ba280})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40002d2800)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 672
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4696 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000cd3180, 0x339b730)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4500
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2931 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0x4001656f00, 0x400157b2d0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2930
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4822 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4000cee540)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4000cee540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4000cee540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4000cee540, 0x4001a1a780)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

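Several TestNetworkPlugins subtests above are parked in waitParallel: each one
called t.Parallel and is waiting for the runner to free a slot. A minimal
sketch of that pattern (the subtest names here are made up):

package integration

import "testing"

func TestNetworkPluginsSketch(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks here, in the waitParallel frame above
			// per-plugin checks would run once a parallel slot opens
		})
	}
}
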
goroutine 5440 [syscall, 5 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x400010cb18, 0x4, 0x40002f47e0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x400010cc78?, 0x1929a0?, 0xffffcd5071a1?, 0x0?, 0x400177c0c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001b10400)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x400010cc48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001656300)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001656300)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400162da40, 0x4001656300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e5778, 0x4000276b60}, 0x400162da40, {0x40015ac228, 0x11}, {0x180f1065?, 0x180f106500161e84?}, {0x692f700b?, 0x400010cf58?}, {0x4001bce100?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x400162da40?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x400162da40, 0x4000488480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5323
	/usr/local/go/src/testing/testing.go:1997 +0x364

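Goroutine 5440 is validateSecondStart driving a minikube invocation through
the suite's Run helper and waiting on the child process. A hedged sketch of
what that helper boils down to (the real integration.Run in helpers_test.go
carries more bookkeeping):

package integration

import (
	"context"
	"os/exec"
	"testing"
)

func runBinary(ctx context.Context, t *testing.T, bin string, args ...string) []byte {
	t.Helper()
	cmd := exec.CommandContext(ctx, bin, args...)
	// CombinedOutput is Start + pipe-draining goroutines + Wait, which is
	// why each running command shows companion "IO wait" copiers above.
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Logf("%s %v: %v", bin, args, err)
	}
	return out
}
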
goroutine 5475 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0x4001656300, 0x40016ac3f0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5440
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2415 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0x4001439c80, 0x40015db1f0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 793
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4881 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001a3b6c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001a3b6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001a3b6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001a3b6c0, 0x4001a1af00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5488 [syscall, 4 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x15, 0x4000107b18, 0x4, 0x4001514ab0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4000107c78?, 0x1929a0?, 0xffffcd5071a1?, 0x0?, 0x4000ce2c30?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40007c4280)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4000107c48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40000d4a80)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40000d4a80)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40014ea8c0, 0x40000d4a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e5778, 0x40003467e0}, 0x40014ea8c0, {0x40015ac258, 0x11}, {0x1eb5716f?, 0x1eb5716f00161e84?}, {0x692f7038?, 0x4000107f58?}, {0x4001bce200?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40014ea8c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40014ea8c0, 0x4001a1a280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5388
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4823 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x400162c540)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x400162c540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400162c540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x400162c540, 0x4001a1a800)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2324 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x400070b080, 0x4001b8c540)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2323
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5323 [chan receive, 5 minutes]:
testing.(*T).Run(0x40006bcfc0, {0x297a643?, 0x40000006ee?}, 0x4000488480)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40006bcfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40006bcfc0, 0x40001ad800)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4700
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5145 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001c20b40, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5140
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 2759 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2758
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2758 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x4000104380}, 0x400141b740, 0x40013d2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x4000104380}, 0xe0?, 0x400141b740, 0x400141b788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x4000104380?}, 0x4001478900?, 0x4000000780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400070b080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2777
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 2499 [select, 98 minutes]:
net/http.(*persistConn).writeLoop(0x4000c98480)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 2480
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 2498 [select, 98 minutes]:
net/http.(*persistConn).readLoop(0x4000c98480)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 2480
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 3322 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x40000d4480, 0x400166bdc0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2707
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5112 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000907ed0, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000907ec0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001c20b40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001d9ae70?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x4000104380?}, 0x40014a2ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x4000104380}, 0x4000919f38, {0x369d680, 0x400151ac60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40014a2fa8?, {0x369d680?, 0x400151ac60?}, 0xc0?, 0x36e5778?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016c0160, 0x3b9aca00, 0x0, 0x1, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5145
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2907 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0x4000484180, 0x4001a01500)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2906
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5491 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0x40000d4a80, 0x4001a01030)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5488
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5474 [IO wait]:
internal/poll.runtime_pollWait(0xffff66e3b000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001dac3c0?, 0x4002e0752d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001dac3c0, {0x4002e0752d, 0xad3, 0xad3})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4000110338, {0x4002e0752d?, 0x400141a568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001a5e9f0, {0x369ba58, 0x40002f25b0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x4001a5e9f0}, {0x369ba58, 0x40002f25b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4000110338?, {0x369bc40, 0x4001a5e9f0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4000110338, {0x369bc40, 0x4001a5e9f0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x4001a5e9f0}, {0x369bad8, 0x4000110338}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40014ea540?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5440
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4998 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4000906bd0, 0x14)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000906bc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000920cc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004a6540?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x4000104380?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x4000104380}, 0x400091ff38, {0x369d680, 0x4001779710}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x4001779710?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40018317e0, 0x3b9aca00, 0x0, 0x1, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4995
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5473 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0xffff66e3a200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001dac2a0?, 0x400155fb38?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001dac2a0, {0x400155fb38, 0x4c8, 0x4c8})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40001102f8, {0x400155fb38?, 0x400141ad68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001a5e9c0, {0x369ba58, 0x40002f25a8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x4001a5e9c0}, {0x369ba58, 0x40002f25a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40001102f8?, {0x369bc40, 0x4001a5e9c0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40001102f8, {0x369bc40, 0x4001a5e9c0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x4001a5e9c0}, {0x369bad8, 0x40001102f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400162da40?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5440
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4821 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40004956c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40004956c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40004956c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40004956c0, 0x4001a1a700)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4500 [chan receive, 27 minutes]:
testing.(*T).Run(0x40015f88c0, {0x296d53a?, 0x40013cdf58?}, 0x339b730)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40015f88c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40015f88c0, 0x339b548)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5113 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x4000104380}, 0x4000cd9f40, 0x4000cd9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x4000104380}, 0xa0?, 0x4000cd9f40, 0x4000cd9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x4000104380?}, 0x40004a6150?, 0x4000448690?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001656780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5145
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4742 [chan receive, 23 minutes]:
testing.(*testState).waitParallel(0x4000714780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4000101180)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4000101180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4000101180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4000101180, 0x4001a1a100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4741
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2757 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4000682b10, 0x22)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000682b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016cbbc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000277570?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x4000104380?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x4000104380}, 0x40013cff38, {0x369d680, 0x4001a63a40}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x4001a63a40?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016c12a0, 0x3b9aca00, 0x0, 0x1, 0x4000104380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2777
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174


Test pass (224/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.1
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 31.42
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.06
18 TestDownloadOnly/v1.34.2/DeleteAll 0.2
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.31
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 144.38
40 TestAddons/serial/GCPAuth/Namespaces 0.23
41 TestAddons/serial/GCPAuth/FakeCredentials 10.77
57 TestAddons/StoppedEnableDisable 12.4
58 TestCertOptions 42.58
59 TestCertExpiration 248.15
61 TestForceSystemdFlag 46.26
62 TestForceSystemdEnv 42.45
67 TestErrorSpam/setup 32.75
68 TestErrorSpam/start 0.79
69 TestErrorSpam/status 1.09
70 TestErrorSpam/pause 5.89
71 TestErrorSpam/unpause 5.33
72 TestErrorSpam/stop 1.51
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.13
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.64
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.72
84 TestFunctional/serial/CacheCmd/cache/add_local 1.08
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 34.06
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.43
95 TestFunctional/serial/LogsFileCmd 1.53
96 TestFunctional/serial/InvalidService 4.36
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 10.04
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.35
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 24.5
110 TestFunctional/parallel/SSHCmd 0.71
111 TestFunctional/parallel/CpCmd 2.06
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.14
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
122 TestFunctional/parallel/License 0.42
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 0.82
125 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.32
130 TestFunctional/parallel/ImageCommands/Setup 0.64
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
138 TestFunctional/parallel/ProfileCmd/profile_list 0.56
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.39
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/parallel/MountCmd/any-port 8.17
157 TestFunctional/parallel/MountCmd/specific-port 2.12
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
159 TestFunctional/parallel/ServiceCmd/List 0.63
160 TestFunctional/parallel/ServiceCmd/JSONOutput 0.7
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.55
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.95
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.86
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.03
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.4
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.21
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.56
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.71
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.8
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.34
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.81
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.14
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.68
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.81
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.38
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 180
265 TestMultiControlPlane/serial/DeployApp 7.07
266 TestMultiControlPlane/serial/PingHostFromPods 1.53
267 TestMultiControlPlane/serial/AddWorkerNode 59.14
268 TestMultiControlPlane/serial/NodeLabels 0.13
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
270 TestMultiControlPlane/serial/CopyFile 19.96
271 TestMultiControlPlane/serial/StopSecondaryNode 12.87
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
273 TestMultiControlPlane/serial/RestartSecondaryNode 28.08
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 149.32
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.01
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.15
279 TestMultiControlPlane/serial/RestartCluster 84.03
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
281 TestMultiControlPlane/serial/AddSecondaryNode 84.8
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
287 TestJSONOutput/start/Command 80.35
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.14
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 60.91
313 TestKicCustomNetwork/use_default_bridge_network 35.59
314 TestKicExistingNetwork 33.71
315 TestKicCustomSubnet 36.35
316 TestKicStaticIP 34.78
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 69.89
321 TestMountStart/serial/StartWithMountFirst 8.9
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.6
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 8
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 141.58
333 TestMultiNode/serial/DeployApp2Nodes 4.84
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 57.62
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.72
338 TestMultiNode/serial/CopyFile 10.53
339 TestMultiNode/serial/StopNode 2.5
340 TestMultiNode/serial/StartAfterStop 8.27
341 TestMultiNode/serial/RestartKeepsNodes 79.99
342 TestMultiNode/serial/DeleteNode 5.65
343 TestMultiNode/serial/StopMultiNode 24.03
344 TestMultiNode/serial/RestartMultiNode 58.06
345 TestMultiNode/serial/ValidateNameConflict 36.97
350 TestPreload 117.28
352 TestScheduledStopUnix 107.78
355 TestInsufficientStorage 12.61
356 TestRunningBinaryUpgrade 300.63
359 TestMissingContainerUpgrade 141.71
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 42.34
363 TestNoKubernetes/serial/StartWithStopK8s 6.7
364 TestNoKubernetes/serial/Start 9.54
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 0.99
368 TestNoKubernetes/serial/Stop 1.32
369 TestNoKubernetes/serial/StartNoArgs 8.39
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
371 TestStoppedBinaryUpgrade/Setup 11.09
372 TestStoppedBinaryUpgrade/Upgrade 298.06
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.6
382 TestPause/serial/Start 84.19
383 TestPause/serial/SecondStartNoReconfiguration 24.2
TestDownloadOnly/v1.28.0/json-events (8.1s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-227195 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-227195 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.101699007s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.10s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 21:08:16.473496  447211 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1202 21:08:16.473575  447211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-227195
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-227195: exit status 85 (93.651652ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-227195 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-227195 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:08:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:08:08.417313  447216 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:08:08.417431  447216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:08.417442  447216 out.go:374] Setting ErrFile to fd 2...
	I1202 21:08:08.417447  447216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:08.417686  447216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	W1202 21:08:08.417843  447216 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-444114/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-444114/.minikube/config/config.json: no such file or directory
	I1202 21:08:08.418243  447216 out.go:368] Setting JSON to true
	I1202 21:08:08.419085  447216 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10217,"bootTime":1764699472,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:08:08.419154  447216 start.go:143] virtualization:  
	I1202 21:08:08.423257  447216 out.go:99] [download-only-227195] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1202 21:08:08.423428  447216 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 21:08:08.423555  447216 notify.go:221] Checking for updates...
	I1202 21:08:08.426100  447216 out.go:171] MINIKUBE_LOCATION=21997
	I1202 21:08:08.428285  447216 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:08:08.430341  447216 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:08:08.432611  447216 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:08:08.435080  447216 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 21:08:08.439999  447216 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 21:08:08.440331  447216 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:08:08.463613  447216 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:08:08.463731  447216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:08.529129  447216 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 21:08:08.519144778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:08.529240  447216 docker.go:319] overlay module found
	I1202 21:08:08.531951  447216 out.go:99] Using the docker driver based on user configuration
	I1202 21:08:08.532006  447216 start.go:309] selected driver: docker
	I1202 21:08:08.532027  447216 start.go:927] validating driver "docker" against <nil>
	I1202 21:08:08.532175  447216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:08.592661  447216 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 21:08:08.583095277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:08.592811  447216 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:08:08.593091  447216 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 21:08:08.593240  447216 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 21:08:08.596127  447216 out.go:171] Using Docker driver with root privileges
	I1202 21:08:08.599037  447216 cni.go:84] Creating CNI manager for ""
	I1202 21:08:08.599115  447216 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:08:08.599129  447216 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:08:08.599214  447216 start.go:353] cluster config:
	{Name:download-only-227195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-227195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:08:08.601920  447216 out.go:99] Starting "download-only-227195" primary control-plane node in "download-only-227195" cluster
	I1202 21:08:08.601940  447216 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:08:08.604732  447216 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:08:08.604788  447216 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 21:08:08.604901  447216 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:08:08.623691  447216 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:08:08.623710  447216 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 21:08:08.623875  447216 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 21:08:08.623981  447216 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 21:08:08.660126  447216 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1202 21:08:08.660154  447216 cache.go:65] Caching tarball of preloaded images
	I1202 21:08:08.660310  447216 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 21:08:08.663367  447216 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 21:08:08.663395  447216 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1202 21:08:08.753637  447216 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1202 21:08:08.753805  447216 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-227195 host does not exist
	  To start a cluster, run: "minikube start -p download-only-227195"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
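
The download step logged above appends the MD5 checksum obtained from the GCS API to the tarball URL as a ?checksum= query parameter and verifies the fetched file against it. A minimal Go sketch of that verify-while-downloading pattern, using the URL and checksum taken from this log; the output filename and the single-pass MultiWriter approach are illustrative assumptions, not minikube's actual download.go:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	const (
		url     = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		wantMD5 = "e092595ade89dbfc477bd4cd6b9c633b" // checksum the GCS API returned, per the log above
	)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("preloaded-images.tar.lz4") // placeholder path
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Hash while writing so the tarball is streamed only once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, wantMD5)
		os.Exit(1)
	}
	fmt.Println("preload verified")
}

Hashing inside the same io.Copy avoids re-reading a multi-hundred-megabyte tarball from disk just to check it.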

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-227195
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.2/json-events (31.42s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-304980 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-304980 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (31.419792433s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (31.42s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 21:08:48.340452  447211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1202 21:08:48.340487  447211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-304980
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-304980: exit status 85 (63.834864ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-227195 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-227195 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-227195                                                                                                                                                   │ download-only-227195 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ -o=json --download-only -p download-only-304980 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-304980 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:08:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:08:16.964385  447416 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:08:16.964560  447416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:16.964574  447416 out.go:374] Setting ErrFile to fd 2...
	I1202 21:08:16.964579  447416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:16.964828  447416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:08:16.965263  447416 out.go:368] Setting JSON to true
	I1202 21:08:16.966030  447416 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10225,"bootTime":1764699472,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:08:16.966099  447416 start.go:143] virtualization:  
	I1202 21:08:16.969329  447416 out.go:99] [download-only-304980] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:08:16.969519  447416 notify.go:221] Checking for updates...
	I1202 21:08:16.972482  447416 out.go:171] MINIKUBE_LOCATION=21997
	I1202 21:08:16.975468  447416 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:08:16.978444  447416 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:08:16.981286  447416 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:08:16.984125  447416 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 21:08:16.989803  447416 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 21:08:16.990050  447416 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:08:17.024512  447416 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:08:17.024685  447416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:17.087883  447416 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-02 21:08:17.078027323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:17.088000  447416 docker.go:319] overlay module found
	I1202 21:08:17.091123  447416 out.go:99] Using the docker driver based on user configuration
	I1202 21:08:17.091165  447416 start.go:309] selected driver: docker
	I1202 21:08:17.091181  447416 start.go:927] validating driver "docker" against <nil>
	I1202 21:08:17.091302  447416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:17.147113  447416 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-02 21:08:17.137763503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:17.147273  447416 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:08:17.147554  447416 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 21:08:17.147709  447416 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 21:08:17.150893  447416 out.go:171] Using Docker driver with root privileges
	I1202 21:08:17.153600  447416 cni.go:84] Creating CNI manager for ""
	I1202 21:08:17.153671  447416 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 21:08:17.153686  447416 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:08:17.153758  447416 start.go:353] cluster config:
	{Name:download-only-304980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-304980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:08:17.156672  447416 out.go:99] Starting "download-only-304980" primary control-plane node in "download-only-304980" cluster
	I1202 21:08:17.156691  447416 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 21:08:17.159493  447416 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:08:17.159528  447416 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:17.159616  447416 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:08:17.178297  447416 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:08:17.178329  447416 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 21:08:17.178436  447416 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 21:08:17.178457  447416 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 21:08:17.178461  447416 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 21:08:17.178473  447416 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 21:08:17.222840  447416 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 21:08:17.222886  447416 cache.go:65] Caching tarball of preloaded images
	I1202 21:08:17.223063  447416 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 21:08:17.226210  447416 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 21:08:17.226238  447416 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1202 21:08:17.316020  447416 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1202 21:08:17.316092  447416 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/21997-444114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-304980 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304980"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.06s)

TestDownloadOnly/v1.34.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.20s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-304980
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.31s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-215360 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-215360 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.313399052s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.31s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-215360
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-215360: exit status 85 (64.588021ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-227195 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-227195 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-227195                                                                                                                                                          │ download-only-227195 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ -o=json --download-only -p download-only-304980 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-304980 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ delete  │ -p download-only-304980                                                                                                                                                          │ download-only-304980 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │ 02 Dec 25 21:08 UTC │
	│ start   │ -o=json --download-only -p download-only-215360 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-215360 │ jenkins │ v1.37.0 │ 02 Dec 25 21:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:08:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:08:48.779555  447613 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:08:48.779693  447613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:48.779705  447613 out.go:374] Setting ErrFile to fd 2...
	I1202 21:08:48.779710  447613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:08:48.779949  447613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:08:48.780376  447613 out.go:368] Setting JSON to true
	I1202 21:08:48.781142  447613 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10257,"bootTime":1764699472,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:08:48.781213  447613 start.go:143] virtualization:  
	I1202 21:08:48.782823  447613 out.go:99] [download-only-215360] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:08:48.783126  447613 notify.go:221] Checking for updates...
	I1202 21:08:48.784748  447613 out.go:171] MINIKUBE_LOCATION=21997
	I1202 21:08:48.785971  447613 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:08:48.787371  447613 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:08:48.788747  447613 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:08:48.789910  447613 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 21:08:48.792079  447613 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 21:08:48.792321  447613 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:08:48.813652  447613 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:08:48.813862  447613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:48.877575  447613 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:48.868385137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:48.877680  447613 docker.go:319] overlay module found
	I1202 21:08:48.879038  447613 out.go:99] Using the docker driver based on user configuration
	I1202 21:08:48.879079  447613 start.go:309] selected driver: docker
	I1202 21:08:48.879095  447613 start.go:927] validating driver "docker" against <nil>
	I1202 21:08:48.879205  447613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:08:48.932781  447613 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:08:48.923689987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:08:48.932944  447613 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:08:48.933253  447613 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 21:08:48.933431  447613 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 21:08:48.934936  447613 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-215360 host does not exist
	  To start a cluster, run: "minikube start -p download-only-215360"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-215360
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1202 21:08:52.546459  447211 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-045307 --alsologtostderr --binary-mirror http://127.0.0.1:40293 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-045307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-045307
--- PASS: TestBinaryMirror (0.60s)
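
TestBinaryMirror points --binary-mirror at a local HTTP server. The "Not caching binary" line above suggests what such a mirror has to serve: the dl.k8s.io path layout (release/<version>/bin/linux/arm64/kubectl) plus the adjacent .sha256 files, since the kubectl URL is verified with checksum=file:.../kubectl.sha256. A minimal sketch of such a mirror, assuming a local ./mirror directory pre-populated with that layout (the directory name is a placeholder; the port matches the one used in this run; the exact set of files minikube requests is an inference from the log, not a documented contract):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve e.g. ./mirror/release/v1.34.2/bin/linux/arm64/kubectl
	// with the matching kubectl.sha256 checksum file next to it.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:40293", fs))
}

With that serving, a start command like the one above (out/minikube-linux-arm64 start --download-only --binary-mirror http://127.0.0.1:40293 ...) resolves its Kubernetes binaries locally instead of from dl.k8s.io.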

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-656754
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-656754: exit status 85 (76.862276ms)

-- stdout --
	* Profile "addons-656754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656754"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
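
The "(dbg) Non-zero exit ... exit status 85" line above depends on distinguishing "the command ran and exited 85" from "the command could not run at all". A minimal Go sketch of that distinction, reusing the binary path and arguments from this test; this illustrates the general pattern, not the suite's actual dbg helper:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-656754")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The report above expects exit status 85 for a missing profile.
		fmt.Printf("exit code %d, output:\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the command failed to start at all
	}
	fmt.Printf("exited 0, output:\n%s", out)
}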

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-656754
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-656754: exit status 85 (80.357153ms)

-- stdout --
	* Profile "addons-656754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656754"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (144.38s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-656754 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-656754 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.377540192s)
--- PASS: TestAddons/Setup (144.38s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-656754 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-656754 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (10.77s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-656754 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-656754 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aec0c877-dadb-4408-ab20-ebbc93df527e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aec0c877-dadb-4408-ab20-ebbc93df527e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003183707s
addons_test.go:694: (dbg) Run:  kubectl --context addons-656754 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-656754 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-656754 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-656754 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.77s)
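
Note: the FakeCredentials block checks that the gcp-auth webhook mutates new pods: the busybox pod comes up with GOOGLE_APPLICATION_CREDENTIALS set and /google-app-creds.json readable. A minimal sketch of the same probe, assuming kubectl on PATH and the context and pod names taken from the log (the Go wrapper itself is illustrative, not the harness's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same check the test performs via kubectl exec: read the env var the
		// gcp-auth webhook is expected to inject into new pods.
		out, err := exec.Command("kubectl", "--context", "addons-656754",
			"exec", "busybox", "--",
			"printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		if err != nil {
			fmt.Println("printenv failed:", err)
			return
		}
		// Per the log this should print /google-app-creds.json.
		fmt.Println("credentials mounted at:", strings.TrimSpace(string(out)))
	}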

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-656754
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-656754: (12.120629376s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-656754
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-656754
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-656754
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (42.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-171913 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1202 22:46:18.471395  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-171913 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.712546778s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-171913 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171913 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-171913 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-171913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-171913: (2.121509561s)
--- PASS: TestCertOptions (42.58s)
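
Note: TestCertOptions verifies the extra --apiserver-ips/--apiserver-names by dumping /var/lib/minikube/certs/apiserver.crt with openssl inside the node. The same fields can be read with Go's crypto/x509; a sketch, assuming the certificate has first been copied out of the node to a local path passed as the first argument:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Equivalent of `openssl x509 -text -noout`, restricted to the fields
		// the test asserts on: the certificate's subject alternative names.
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)     // should include localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses)  // should include 127.0.0.1, 192.168.15.15
		fmt.Println("expires: ", cert.NotAfter)     // the field TestCertExpiration manipulates
	}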

TestCertExpiration (248.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-196243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-196243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.943905399s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-196243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-196243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.251321508s)
helpers_test.go:175: Cleaning up "cert-expiration-196243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-196243
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-196243: (2.948352596s)
--- PASS: TestCertExpiration (248.15s)

TestForceSystemdFlag (46.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-357786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-357786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.733439498s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-357786 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-357786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-357786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-357786: (3.12404886s)
--- PASS: TestForceSystemdFlag (46.26s)

TestForceSystemdEnv (42.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-200749 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-200749 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.885365875s)
helpers_test.go:175: Cleaning up "force-systemd-env-200749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-200749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-200749: (2.568351731s)
--- PASS: TestForceSystemdEnv (42.45s)

TestErrorSpam/setup (32.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-784632 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-784632 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-784632 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-784632 --driver=docker  --container-runtime=crio: (32.754621543s)
--- PASS: TestErrorSpam/setup (32.75s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (5.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause: exit status 80 (1.741789279s)

-- stdout --
	* Pausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause: exit status 80 (2.353855978s)

-- stdout --
	* Pausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause: exit status 80 (1.788384601s)

-- stdout --
	* Pausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.89s)
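
Note: all three pause attempts above fail identically: `minikube pause` lists running containers with `sudo runc list -f json` inside the node, /run/runc does not exist there, so the inner command exits 1 and minikube surfaces GUEST_PAUSE with exit code 80. A sketch of reproducing just the failing probe by hand, assuming the profile name from the log (the wrapper is illustrative, not the harness's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command minikube's pause path runs inside the node.
		cmd := exec.Command("minikube", "-p", "nospam-784632", "ssh", "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // expect: open /run/runc: no such file or directory
		if err != nil {
			fmt.Println("probe failed as in the log:", err)
		}
	}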

TestErrorSpam/unpause (5.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause: exit status 80 (1.796217132s)

-- stdout --
	* Unpausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause: exit status 80 (1.947425823s)

-- stdout --
	* Unpausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause: exit status 80 (1.584927264s)

-- stdout --
	* Unpausing node nospam-784632 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T21:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.33s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 stop: (1.313718933s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-784632 --log_dir /tmp/nospam-784632 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1202 21:16:18.475601  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.482213  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.493606  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.515021  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.556486  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.637935  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:18.799412  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:19.121591  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:19.763182  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:21.045433  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:23.607133  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:28.728902  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:38.970200  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:16:59.451651  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-218190 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.132494557s)
--- PASS: TestFunctional/serial/StartWithProxy (80.13s)
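
Note: the repeating E1202 cert_rotation lines above come from the shared kubeconfig still referencing the client certificate of the deleted addons-656754 profile, so client-go's certificate reloader logs an error on every retry even though the test itself passes. A sketch of a scanner that would flag such stale references, assuming a plain YAML kubeconfig with file-based `client-certificate:` entries (the scanner is illustrative):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open(os.Args[1]) // path to a kubeconfig
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "client-certificate:") {
				continue
			}
			path := strings.TrimSpace(strings.TrimPrefix(line, "client-certificate:"))
			// A missing file here is exactly what produces the log spam above.
			if _, err := os.Stat(path); err != nil {
				fmt.Println("stale cert reference:", path, "->", err)
			}
		}
	}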

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.64s)

=== RUN   TestFunctional/serial/SoftStart
I1202 21:17:16.149970  447211 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --alsologtostderr -v=8
E1202 21:17:40.413913  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-218190 --alsologtostderr -v=8: (29.637108224s)
functional_test.go:678: soft start took 29.642977382s for "functional-218190" cluster.
I1202 21:17:45.792755  447211 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (29.64s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-218190 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:3.1: (1.256781282s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:3.3: (1.292343658s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 cache add registry.k8s.io/pause:latest: (1.172423787s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-218190 /tmp/TestFunctionalserialCacheCmdcacheadd_local2732935249/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache add minikube-local-cache-test:functional-218190
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache delete minikube-local-cache-test:functional-218190
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-218190
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.88853ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
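
Note: the cache_reload block is a three-step round trip: remove pause:latest from the node's image store, confirm `crictl inspecti` now fails (the expected non-zero exit above), then `cache reload` and confirm the image is back. A compressed sketch of the same sequence, assuming the functional profile name from the log (the helper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-218190"
		run("-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
		if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("unexpected: image still present")
		}
		run("-p", p, "cache", "reload") // repushes everything in the local cache
		if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("reload did not restore the image:", err)
		}
	}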

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 kubectl -- --context functional-218190 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-218190 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (34.06s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-218190 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.055586645s)
functional_test.go:776: restart took 34.055695396s for "functional-218190" cluster.
I1202 21:18:27.462949  447211 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (34.06s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-218190 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
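
Note: ComponentHealth pulls the control-plane pods as JSON (`-l tier=control-plane -n kube-system -o=json`) and checks each pod's phase and Ready condition, which is what the phase/status lines above report. A sketch of that decoding with a minimal struct, assuming kubectl on PATH and the context from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Phase      string
				Conditions []struct{ Type, Status string }
			}
		}
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-218190",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}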

TestFunctional/serial/LogsCmd (1.43s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 logs: (1.431202276s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

TestFunctional/serial/LogsFileCmd (1.53s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 logs --file /tmp/TestFunctionalserialLogsFileCmd782173672/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 logs --file /tmp/TestFunctionalserialLogsFileCmd782173672/001/logs.txt: (1.527280407s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-218190 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-218190
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-218190: exit status 115 (386.152316ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32547 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-218190 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
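
Note: InvalidService deploys a Service whose selector matches no running pod; `minikube service` resolves the NodePort URL but then exits 115 (SVC_UNREACHABLE) because nothing backs it. While the bad Service is still deployed, the missing-endpoints precondition can be checked directly; a sketch, assuming kubectl on PATH and the service name from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// A Service with no ready pods has no endpoint subsets, which is what
		// makes `minikube service` bail out with SVC_UNREACHABLE above.
		out, err := exec.Command("kubectl", "--context", "functional-218190",
			"get", "endpoints", "invalid-svc", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var ep struct {
			Subsets []json.RawMessage
		}
		if err := json.Unmarshal(out, &ep); err != nil {
			panic(err)
		}
		if len(ep.Subsets) == 0 {
			fmt.Println("invalid-svc has no endpoints; service is unreachable")
		}
	}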

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 config get cpus: exit status 14 (97.865213ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 config get cpus: exit status 14 (71.63735ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
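
Note: both `config get cpus` calls above fail with exit status 14 once the key is unset, and the harness asserts on that specific code. Extracting an exit status from os/exec in Go looks like this; a sketch, with the bare "minikube" binary name an assumption:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "functional-218190", "config", "get", "cpus")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // 14: key not found in config, per the log
		} else if err == nil {
			fmt.Println("key is set")
		}
	}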

TestFunctional/parallel/DashboardCmd (10.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-218190 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-218190 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 475049: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.04s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.698251ms)

-- stdout --
	* [functional-218190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1202 21:28:55.878335  472730 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:28:55.878483  472730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:28:55.878494  472730 out.go:374] Setting ErrFile to fd 2...
	I1202 21:28:55.878500  472730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:28:55.878753  472730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:28:55.879153  472730 out.go:368] Setting JSON to false
	I1202 21:28:55.880018  472730 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11464,"bootTime":1764699472,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:28:55.880127  472730 start.go:143] virtualization:  
	I1202 21:28:55.883688  472730 out.go:179] * [functional-218190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:28:55.886593  472730 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:28:55.886692  472730 notify.go:221] Checking for updates...
	I1202 21:28:55.892578  472730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:28:55.895549  472730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:28:55.898393  472730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:28:55.901224  472730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:28:55.904053  472730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:28:55.907315  472730 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:28:55.907907  472730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:28:55.931437  472730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:28:55.931557  472730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:28:56.004810  472730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:28:55.986161477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:28:56.004936  472730 docker.go:319] overlay module found
	I1202 21:28:56.008224  472730 out.go:179] * Using the docker driver based on existing profile
	I1202 21:28:56.011389  472730 start.go:309] selected driver: docker
	I1202 21:28:56.011426  472730 start.go:927] validating driver "docker" against &{Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:28:56.011569  472730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:28:56.016056  472730 out.go:203] 
	W1202 21:28:56.018977  472730 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:28:56.021819  472730 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
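
Note: the dry run fails deliberately: the requested 250MB is below the 1800MB floor, so validation stops with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before any node is touched. The check amounts to a simple comparison; a sketch with the floor taken from the message above (the function name is illustrative, not minikube's own):

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

	// validateRequestedMemory mirrors the shape of minikube's start-time check:
	// reject allocations below the usable minimum before creating anything.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250))  // fails, as in the dry run above
		fmt.Println(validateRequestedMemory(4096)) // passes
	}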

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-218190 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (232.884193ms)

-- stdout --
	* [functional-218190] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1202 21:29:09.325386  474640 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:09.325595  474640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:09.325627  474640 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:09.325647  474640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:09.326018  474640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:29:09.326396  474640 out.go:368] Setting JSON to false
	I1202 21:29:09.327335  474640 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11478,"bootTime":1764699472,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:29:09.327425  474640 start.go:143] virtualization:  
	I1202 21:29:09.330827  474640 out.go:179] * [functional-218190] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 21:29:09.334583  474640 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:09.334686  474640 notify.go:221] Checking for updates...
	I1202 21:29:09.340357  474640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:09.343301  474640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:29:09.346234  474640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:29:09.349178  474640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:09.352281  474640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:09.355727  474640 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 21:29:09.356322  474640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:09.380881  474640 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:09.381022  474640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:09.481999  474640 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:09.472111492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:09.482108  474640 docker.go:319] overlay module found
	I1202 21:29:09.487050  474640 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 21:29:09.489898  474640 start.go:309] selected driver: docker
	I1202 21:29:09.489918  474640 start.go:927] validating driver "docker" against &{Name:functional-218190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-218190 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:09.490019  474640 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:09.493680  474640 out.go:203] 
	W1202 21:29:09.496692  474640 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 21:29:09.499642  474640 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
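Note: this test repeats the under-provisioned dry-run with a French locale in the child environment (the minikube suite does this via LC_ALL=fr; that variable is an assumption here, as it is not visible in this log) and asserts that the output is localized. The French error above is the same RSRC_INSUFFICIENT_REQ_MEMORY message seen in DryRun: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB".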

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
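Note: the second invocation above exercises the -f/--format flag, which renders a Go template over minikube's status struct (the Host, Kubelet, APIServer and Kubeconfig fields used in the test; the "kublet" label is spelled that way in the test source itself). For instance, to print only the kubelet state for this profile:

	out/minikube-linux-arm64 -p functional-218190 status -f '{{.Kubelet}}'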

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (24.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1b27243c-04f5-4bd3-8e4c-5e043501d2e3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003717071s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-218190 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-218190 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-218190 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-218190 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b7804e55-e44d-4df1-bd1f-530912c27e26] Pending
helpers_test.go:352: "sp-pod" [b7804e55-e44d-4df1-bd1f-530912c27e26] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b7804e55-e44d-4df1-bd1f-530912c27e26] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010042022s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-218190 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-218190 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-218190 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [54f092c4-dd59-49e3-9b11-89b15d0715ad] Pending
helpers_test.go:352: "sp-pod" [54f092c4-dd59-49e3-9b11-89b15d0715ad] Running
E1202 21:19:02.335747  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003390243s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-218190 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.50s)
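Note: the sequence above is a persistence check: the test writes /tmp/mount/foo inside the first sp-pod, deletes that pod, recreates it from the same manifest, and the final `ls /tmp/mount` only passes because the file survived on the PVC-backed volume rather than in the container filesystem. The claim's binding can be confirmed directly:

	kubectl --context functional-218190 get pvc myclaim -o jsonpath='{.status.phase}'   # expect: Bound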

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh -n functional-218190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cp functional-218190:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1694082596/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh -n functional-218190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh -n functional-218190 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/447211/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /etc/test/nested/copy/447211/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
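Note: FileSync covers minikube's host-to-node file sync: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the same relative path on start. The staging location is the documented convention, not shown in this log; the synced file itself can be read back by hand:

	out/minikube-linux-arm64 -p functional-218190 ssh "cat /etc/test/nested/copy/447211/hosts"   # expect: Test file for checking file sync process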

TestFunctional/parallel/CertSync (2.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/447211.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /etc/ssl/certs/447211.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/447211.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /usr/share/ca-certificates/447211.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4472112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /etc/ssl/certs/4472112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4472112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /usr/share/ca-certificates/4472112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
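Note: the hashed names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links, the form in which CA certificates are registered under /etc/ssl/certs. The hash for any PEM certificate can be computed on the host to confirm which link belongs to it:

	openssl x509 -noout -subject_hash -in cert.pem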

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-218190 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh "sudo systemctl is-active docker": exit status 1 (445.171792ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh "sudo systemctl is-active containerd": exit status 1 (374.895343ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
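Note: `systemctl is-active` exits non-zero for any state other than active (status 3 for inactive, which is what the `ssh: Process exited with status 3` lines reflect), so an exit-1 from `minikube ssh` together with `inactive` on stdout is the expected result for docker and containerd on a crio cluster. The active runtime answers the same probe:

	out/minikube-linux-arm64 -p functional-218190 ssh "sudo systemctl is-active crio"   # expect: active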

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 image ls --format short --alsologtostderr: (2.274210223s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-218190 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-218190 image ls --format short --alsologtostderr:
I1202 21:29:16.139698  475853 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:16.139881  475853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:16.139908  475853 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:16.139928  475853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:16.140218  475853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:29:16.141917  475853 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:16.147451  475853 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:16.148129  475853 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
I1202 21:29:16.167276  475853 ssh_runner.go:195] Run: systemctl --version
I1202 21:29:16.167338  475853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
I1202 21:29:16.188493  475853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
I1202 21:29:16.293936  475853 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 21:29:18.322027  475853 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.028035369s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
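Note: as the stderr trace shows, `image ls` works by SSHing into the node and running `sudo crictl images --output json`; the 2.27s duration of this run is almost entirely that remote call (the `Completed: ... (2.028035369s)` line). The same listing can be taken directly:

	out/minikube-linux-arm64 -p functional-218190 ssh "sudo crictl images"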

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-218190 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-218190 image ls --format table --alsologtostderr:
I1202 21:29:20.065352  476107 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:20.065505  476107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:20.065529  476107 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:20.065537  476107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:20.065850  476107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:29:20.066548  476107 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:20.066739  476107 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:20.067350  476107 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
I1202 21:29:20.085971  476107 ssh_runner.go:195] Run: systemctl --version
I1202 21:29:20.086032  476107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
I1202 21:29:20.106059  476107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
I1202 21:29:20.213663  476107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-218190 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a
944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","
registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01
b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47
ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metric
s-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-218190 image ls --format json --alsologtostderr:
I1202 21:29:19.826249  476072 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:19.826415  476072 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:19.826436  476072 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:19.826457  476072 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:19.826719  476072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:29:19.827391  476072 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:19.827547  476072 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:19.828106  476072 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
I1202 21:29:19.845622  476072 ssh_runner.go:195] Run: systemctl --version
I1202 21:29:19.845679  476072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
I1202 21:29:19.863737  476072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
I1202 21:29:19.965737  476072 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-218190 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-218190 image ls --format yaml --alsologtostderr:
I1202 21:29:19.593106  476037 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:19.593233  476037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:19.593244  476037 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:19.593249  476037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:19.593503  476037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:29:19.594137  476037 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:19.594261  476037 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:19.594784  476037 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
I1202 21:29:19.613631  476037 ssh_runner.go:195] Run: systemctl --version
I1202 21:29:19.613699  476037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
I1202 21:29:19.637884  476037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
I1202 21:29:19.741621  476037 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh pgrep buildkitd: exit status 1 (321.747234ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr
2025/12/02 21:29:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr: (3.744266575s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7d649b11316
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-218190
--> a2e85a4ba35
Successfully tagged localhost/my-image:functional-218190
a2e85a4ba35c6e2780a88c3aa8b283638975a61c97fc9f147f8586895e1b3e29
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-218190 image build -t localhost/my-image:functional-218190 testdata/build --alsologtostderr:
I1202 21:29:18.716347  475984 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:18.717088  475984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:18.717126  475984 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:18.717144  475984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:18.717437  475984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:29:18.718145  475984 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:18.718856  475984 config.go:182] Loaded profile config "functional-218190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 21:29:18.719486  475984 cli_runner.go:164] Run: docker container inspect functional-218190 --format={{.State.Status}}
I1202 21:29:18.740636  475984 ssh_runner.go:195] Run: systemctl --version
I1202 21:29:18.740688  475984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-218190
I1202 21:29:18.761661  475984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-218190/id_rsa Username:docker}
I1202 21:29:18.870230  475984 build_images.go:162] Building image from path: /tmp/build.3374015972.tar
I1202 21:29:18.870312  475984 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 21:29:18.879399  475984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3374015972.tar
I1202 21:29:18.883639  475984 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3374015972.tar: stat -c "%s %y" /var/lib/minikube/build/build.3374015972.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3374015972.tar': No such file or directory
I1202 21:29:18.883674  475984 ssh_runner.go:362] scp /tmp/build.3374015972.tar --> /var/lib/minikube/build/build.3374015972.tar (3072 bytes)
I1202 21:29:18.902537  475984 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3374015972
I1202 21:29:18.911913  475984 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3374015972 -xf /var/lib/minikube/build/build.3374015972.tar
I1202 21:29:18.920791  475984 crio.go:315] Building image: /var/lib/minikube/build/build.3374015972
I1202 21:29:18.920865  475984 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-218190 /var/lib/minikube/build/build.3374015972 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1202 21:29:22.372330  475984 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-218190 /var/lib/minikube/build/build.3374015972 --cgroup-manager=cgroupfs: (3.451439066s)
I1202 21:29:22.372412  475984 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3374015972
I1202 21:29:22.380635  475984 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3374015972.tar
I1202 21:29:22.388980  475984 build_images.go:218] Built localhost/my-image:functional-218190 from /tmp/build.3374015972.tar
I1202 21:29:22.389010  475984 build_images.go:134] succeeded building to: functional-218190
I1202 21:29:22.389015  475984 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)
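Note: on a crio cluster, `minikube image build` tars the build context, copies it to /var/lib/minikube/build on the node, and runs `sudo podman build` there, all visible in the stderr trace above. The three STEP lines imply a Dockerfile of roughly this shape (reconstructed from the build output; the actual testdata/build contents are not in this log):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /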

TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-218190
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "490.444981ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "65.495426ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "458.905219ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "73.492787ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
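
The ProfileCmd subtests exercise the listing variants timed above, and the same commands can be run by hand (a sketch, assuming a stock minikube on PATH). The light variants return roughly 7x faster in the timings above, consistent with them skipping the per-profile status probing that the full listing performs:

  minikube profile list                  # full table, probes each profile (~490ms above)
  minikube profile list -l               # light listing (~65ms above)
  minikube profile list -o json          # machine-readable
  minikube profile list -o json --light  # light + JSON (~73ms above)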

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image rm kicbase/echo-server:functional-218190 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)
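
ImageRemove is a two-step check that can be reproduced directly (a sketch; profile and tag names are the ones from this run):

  # Remove the tagged image from the node's container runtime, then list
  # the remaining images to confirm it is gone.
  minikube -p functional-218190 image rm kicbase/echo-server:functional-218190 --alsologtostderr
  minikube -p functional-218190 image ls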

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 471198: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
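
RunSecondTunnel verifies that two concurrent tunnel daemons can coexist and be torn down; roughly, as a shell sketch (cleanup errors like "process already finished" above are expected and benign):

  minikube -p functional-218190 tunnel --alsologtostderr &   # first tunnel
  minikube -p functional-218190 tunnel --alsologtostderr &   # second tunnel
  kill $(jobs -p) 2>/dev/null                                # stop both; one may already be gone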

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-218190 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c6f67c3e-82ce-42b3-b3eb-a5c47bce0762] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c6f67c3e-82ce-42b3-b3eb-a5c47bce0762] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003296145s
I1202 21:18:51.979363  447211 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)
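
The setup step applies minikube's testdata/testsvc.yaml (an nginx pod labelled run=nginx-svc plus a LoadBalancer service) and polls for the pod; a sketch, with kubectl wait standing in for the test's own polling helper:

  kubectl --context functional-218190 apply -f testdata/testsvc.yaml
  # the test allows up to 4m0s; the pod went Running in ~9s here
  kubectl --context functional-218190 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m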

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-218190 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.145.220 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
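
IngressIP and AccessDirect together confirm that, with "minikube tunnel" running, the LoadBalancer service is reachable from the host at its ingress IP (a sketch; the IP is assigned per run):

  IP=$(kubectl --context functional-218190 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sSf "http://$IP" >/dev/null && echo "tunnel at http://$IP is working!"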

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-218190 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdany-port221705005/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764710936298131762" to /tmp/TestFunctionalparallelMountCmdany-port221705005/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764710936298131762" to /tmp/TestFunctionalparallelMountCmdany-port221705005/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764710936298131762" to /tmp/TestFunctionalparallelMountCmdany-port221705005/001/test-1764710936298131762
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.13391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:28:56.678549  447211 retry.go:31] will retry after 734.045012ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 21:28 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 21:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 21:28 test-1764710936298131762
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh cat /mount-9p/test-1764710936298131762
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-218190 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e8dba0d0-8ee0-4045-a49e-c586cf021552] Pending
helpers_test.go:352: "busybox-mount" [e8dba0d0-8ee0-4045-a49e-c586cf021552] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e8dba0d0-8ee0-4045-a49e-c586cf021552] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e8dba0d0-8ee0-4045-a49e-c586cf021552] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003010904s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-218190 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdany-port221705005/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.17s)
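
The any-port flow mounts a host directory into the guest over 9p and checks it from both sides; a condensed sketch (the host path is illustrative; the first findmnt may need a retry while the mount settles, as seen above):

  minikube mount -p functional-218190 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  minikube -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p"   # verify the 9p mount
  minikube -p functional-218190 ssh -- ls -la /mount-9p                # host files visible in guest
  minikube -p functional-218190 ssh "sudo umount -f /mount-9p"         # manual teardown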

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdspecific-port3507260626/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.271296ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:29:04.806621  447211 retry.go:31] will retry after 723.735501ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdspecific-port3507260626/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-218190 ssh "sudo umount -f /mount-9p": exit status 1 (285.055739ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-218190 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdspecific-port3507260626/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-218190 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-218190 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2502306404/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)
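
VerifyCleanup starts three mounts of the same host directory and then uses the kill switch to reap them all at once, which is why each individual stop afterwards finds its process already gone (a sketch):

  # Tear down every mount process associated with the profile in one shot
  minikube mount -p functional-218190 --kill=true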

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-218190 service list -o json
functional_test.go:1504: Took "701.172729ms" to run "out/minikube-linux-arm64 -p functional-218190 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.70s)
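
Both ServiceCmd listings are plain CLI calls (a sketch):

  minikube -p functional-218190 service list           # human-readable table
  minikube -p functional-218190 service list -o json   # machine-readable (~0.7s above)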

                                                
                                    
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-218190
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-218190
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-218190
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-444114/.minikube/files/etc/test/nested/copy/447211/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:3.1: (1.230917736s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:3.3: (1.192077566s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 cache add registry.k8s.io/pause:latest: (1.129707374s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.55s)
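
The cache subtests pull images on the host and preload them into the node's runtime; the remote-add step looks like this (a sketch, using the same pause tags as the run):

  minikube -p functional-066896 cache add registry.k8s.io/pause:3.1
  minikube -p functional-066896 cache add registry.k8s.io/pause:3.3
  minikube -p functional-066896 cache add registry.k8s.io/pause:latest
  minikube cache list   # note: cache list/delete are invoked without -p in this run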

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach995674172/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache add minikube-local-cache-test:functional-066896
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache delete minikube-local-cache-test:functional-066896
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-066896
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.95s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (326.272024ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.86s)
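
cache_reload deletes the image inside the node, proves it is gone, restores it from the host-side cache, and re-checks; a sketch mirroring the commands above (the intermediate inspecti failure is the expected state, not an error):

  minikube -p functional-066896 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image absent
  minikube -p functional-066896 cache reload
  minikube -p functional-066896 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again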

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs176787459/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs176787459/001/logs.txt: (1.028203067s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 config get cpus: exit status 14 (62.485684ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 config get cpus: exit status 14 (72.621746ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.40s)
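
ConfigCmd cycles a key through unset/set/get; "config get" on an unset key exits 14 with the "key could not be found" error seen twice above (a sketch):

  minikube -p functional-066896 config unset cpus
  minikube -p functional-066896 config get cpus    # exit 14: not set
  minikube -p functional-066896 config set cpus 2
  minikube -p functional-066896 config get cpus    # prints 2
  minikube -p functional-066896 config unset cpus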

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.513959ms)

                                                
                                                
-- stdout --
	* [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:58:44.024953  507093 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:58:44.025151  507093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:44.025183  507093 out.go:374] Setting ErrFile to fd 2...
	I1202 21:58:44.025202  507093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:44.025504  507093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:58:44.026004  507093 out.go:368] Setting JSON to false
	I1202 21:58:44.026926  507093 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13252,"bootTime":1764699472,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:58:44.027073  507093 start.go:143] virtualization:  
	I1202 21:58:44.030448  507093 out.go:179] * [functional-066896] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:58:44.033457  507093 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:58:44.033601  507093 notify.go:221] Checking for updates...
	I1202 21:58:44.039299  507093 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:58:44.042192  507093 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:58:44.045052  507093 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:58:44.047937  507093 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:58:44.051048  507093 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:58:44.054523  507093 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:58:44.055318  507093 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:58:44.083108  507093 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:58:44.083277  507093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:44.140869  507093 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:44.131666106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:44.140979  507093 docker.go:319] overlay module found
	I1202 21:58:44.144057  507093 out.go:179] * Using the docker driver based on existing profile
	I1202 21:58:44.146861  507093 start.go:309] selected driver: docker
	I1202 21:58:44.146876  507093 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:44.146977  507093 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:58:44.150629  507093 out.go:203] 
	W1202 21:58:44.153417  507093 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:58:44.156205  507093 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)
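
DryRun validates flags against the existing profile without touching it: an under-sized memory request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the flagless dry run succeeds (a sketch):

  # 250MB is below the 1800MB usable minimum, so this exits 23
  minikube start -p functional-066896 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
  # without the memory override the same dry run passes
  minikube start -p functional-066896 --dry-run --alsologtostderr -v=1 \
    --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0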

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-066896 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (210.512448ms)

                                                
                                                
-- stdout --
	* [functional-066896] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:58:46.901861  507730 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:58:46.902020  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902049  507730 out.go:374] Setting ErrFile to fd 2...
	I1202 21:58:46.902055  507730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:58:46.902463  507730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 21:58:46.902886  507730 out.go:368] Setting JSON to false
	I1202 21:58:46.903818  507730 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13255,"bootTime":1764699472,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1202 21:58:46.903890  507730 start.go:143] virtualization:  
	I1202 21:58:46.907131  507730 out.go:179] * [functional-066896] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 21:58:46.910758  507730 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:58:46.910832  507730 notify.go:221] Checking for updates...
	I1202 21:58:46.916328  507730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:58:46.919207  507730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	I1202 21:58:46.922097  507730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	I1202 21:58:46.924927  507730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:58:46.927693  507730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:58:46.931080  507730 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 21:58:46.931712  507730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:58:46.967128  507730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:58:46.967244  507730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:58:47.036134  507730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:58:47.026846878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:58:47.036254  507730 docker.go:319] overlay module found
	I1202 21:58:47.039414  507730 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 21:58:47.042260  507730 start.go:309] selected driver: docker
	I1202 21:58:47.042282  507730 start.go:927] validating driver "docker" against &{Name:functional-066896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-066896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:58:47.042390  507730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:58:47.045971  507730 out.go:203] 
	W1202 21:58:47.048833  507730 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 21:58:47.051708  507730 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh -n functional-066896 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cp functional-066896:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3183472780/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh -n functional-066896 "sudo cat /home/docker/cp-test.txt"
I1202 21:56:49.474592  447211 retry.go:31] will retry after 2.16363215s: Temporary Error: Get "http://10.101.145.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh -n functional-066896 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.71s)
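
CpCmd round-trips a file host-to-node, node-to-host, and host-to-arbitrary-node-path, verifying each copy with ssh + cat; a sketch using the same paths as the run (the retry line interleaved above belongs to the parallel tunnel test, not to CpCmd):

  minikube -p functional-066896 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-066896 ssh -n functional-066896 "sudo cat /home/docker/cp-test.txt"
  minikube -p functional-066896 cp functional-066896:/home/docker/cp-test.txt /tmp/cp-test.txt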

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/447211/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /etc/test/nested/copy/447211/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/447211.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /etc/ssl/certs/447211.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/447211.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /usr/share/ca-certificates/447211.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4472112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /etc/ssl/certs/4472112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4472112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /usr/share/ca-certificates/4472112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)

                                                
                                    
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active docker": exit status 1 (453.085377ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active containerd": exit status 1 (344.560159ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.80s)
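Note: the non-zero exits above are the expected result for this test. With crio as the active container runtime, "systemctl is-active" reports docker and containerd as inactive and exits with status 3, which the test treats as a pass. A minimal manual reproduction, assuming the functional-066896 profile from this run is still up (the final crio check is an assumption added for contrast; it is not part of the test output):

	out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active docker"      # prints "inactive", exits 3
	out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active containerd"  # prints "inactive", exits 3
	out/minikube-linux-arm64 -p functional-066896 ssh "sudo systemctl is-active crio"        # assumed to print "active" and exit 0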

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-066896 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-066896 image ls --format short --alsologtostderr:
I1202 21:58:49.398962  508254 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:49.399161  508254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:49.399171  508254 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:49.399177  508254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:49.399440  508254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:49.400115  508254 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:49.400244  508254 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:49.400775  508254 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:49.426837  508254 ssh_runner.go:195] Run: systemctl --version
I1202 21:58:49.426936  508254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:49.445937  508254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:58:49.549702  508254 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-066896 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
│ localhost/my-image                      │ functional-066896 │ 58acbfc86324d │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ ccd634d9bcc36 │ 84.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 404c2e1286177 │ 74.1MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-066896 image ls --format table --alsologtostderr:
I1202 21:58:53.907213  508745 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:53.907354  508745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:53.907366  508745 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:53.907372  508745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:53.907642  508745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:53.908275  508745 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:53.908442  508745 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:53.909056  508745 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:53.929173  508745 ssh_runner.go:195] Run: systemctl --version
I1202 21:58:53.929228  508745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:53.951290  508745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:58:54.057828  508745 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-066896 image ls --format json --alsologtostderr:
[{"id":"58acbfc86324d5017011f92aab80a5f8aa1abe913efd2716858060c1bc33bfba","repoDigests":["localhost/my-image@sha256:93ac52091d8d037f122757d07798e1b7d41f840d31943182a3b0335ac665278c"],"repoTags":["localhost/my-image:functional-066896"],"size":"1640791"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72167568"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"],"repoTag
s":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49819792"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"54decbe424837da74d574e85d38b42b5e1fbbff3626f94e8e51e11fea02c3633","repoDigests":["docker.io/library/03ed9fd9d095db0a641edba3ccf390a1c68be8697acd25c6ef3980b67c103b10-tmp@sha256:70db66ac59f2e18dd80877824552cb87309a5a2f72edde0e332db24530fe313a"],"repoTags":[],"size":"1638179"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65
b2087d960e03e16a13bb4070fb6ba6fee7825"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60854229"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84947242"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74105124"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93
d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-066896 image ls --format json --alsologtostderr:
I1202 21:58:53.681091  508709 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:53.681264  508709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:53.681276  508709 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:53.681282  508709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:53.681540  508709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:53.682197  508709 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:53.682352  508709 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:53.682924  508709 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:53.699685  508709 ssh_runner.go:195] Run: systemctl --version
I1202 21:58:53.699749  508709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:53.715858  508709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:58:53.817637  508709 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-066896 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60854229"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84947242"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72167568"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74105124"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49819792"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-066896 image ls --format yaml --alsologtostderr:
I1202 21:58:49.641241  508295 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:49.641421  508295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:49.641434  508295 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:49.641439  508295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:49.641680  508295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:49.642289  508295 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:49.642414  508295 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:49.642969  508295 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:49.660528  508295 ssh_runner.go:195] Run: systemctl --version
I1202 21:58:49.660595  508295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:49.678062  508295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:58:49.781841  508295 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh pgrep buildkitd: exit status 1 (260.073848ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image build -t localhost/my-image:functional-066896 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-066896 image build -t localhost/my-image:functional-066896 testdata/build --alsologtostderr: (3.336039758s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-066896 image build -t localhost/my-image:functional-066896 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 54decbe4248
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-066896
--> 58acbfc8632
Successfully tagged localhost/my-image:functional-066896
58acbfc86324d5017011f92aab80a5f8aa1abe913efd2716858060c1bc33bfba
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-066896 image build -t localhost/my-image:functional-066896 testdata/build --alsologtostderr:
I1202 21:58:50.133085  508396 out.go:360] Setting OutFile to fd 1 ...
I1202 21:58:50.133216  508396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:50.133226  508396 out.go:374] Setting ErrFile to fd 2...
I1202 21:58:50.133232  508396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:58:50.133492  508396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
I1202 21:58:50.134115  508396 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:50.134806  508396 config.go:182] Loaded profile config "functional-066896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 21:58:50.135431  508396 cli_runner.go:164] Run: docker container inspect functional-066896 --format={{.State.Status}}
I1202 21:58:50.153332  508396 ssh_runner.go:195] Run: systemctl --version
I1202 21:58:50.153387  508396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-066896
I1202 21:58:50.175198  508396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/functional-066896/id_rsa Username:docker}
I1202 21:58:50.281518  508396 build_images.go:162] Building image from path: /tmp/build.2395983821.tar
I1202 21:58:50.281598  508396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 21:58:50.289032  508396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2395983821.tar
I1202 21:58:50.292843  508396 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2395983821.tar: stat -c "%s %y" /var/lib/minikube/build/build.2395983821.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2395983821.tar': No such file or directory
I1202 21:58:50.292876  508396 ssh_runner.go:362] scp /tmp/build.2395983821.tar --> /var/lib/minikube/build/build.2395983821.tar (3072 bytes)
I1202 21:58:50.310251  508396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2395983821
I1202 21:58:50.318404  508396 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2395983821 -xf /var/lib/minikube/build/build.2395983821.tar
I1202 21:58:50.328353  508396 crio.go:315] Building image: /var/lib/minikube/build/build.2395983821
I1202 21:58:50.328438  508396 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-066896 /var/lib/minikube/build/build.2395983821 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1202 21:58:53.391251  508396 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-066896 /var/lib/minikube/build/build.2395983821 --cgroup-manager=cgroupfs: (3.062784609s)
I1202 21:58:53.391316  508396 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2395983821
I1202 21:58:53.398825  508396 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2395983821.tar
I1202 21:58:53.406515  508396 build_images.go:218] Built localhost/my-image:functional-066896 from /tmp/build.2395983821.tar
I1202 21:58:53.406552  508396 build_images.go:134] succeeded building to: functional-066896
I1202 21:58:53.406558  508396 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)
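For reference, the STEP lines in the build output above imply that the testdata/build context contains a Containerfile along these lines (a reconstruction from the log, not the file itself; the contents of content.txt are not shown):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

As the ssh_runner lines show, minikube packs the context into a tar (/tmp/build.2395983821.tar here), copies it onto the node, and builds it there with "sudo podman build ... --cgroup-manager=cgroupfs".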

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-066896
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image rm kicbase/echo-server:functional-066896 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.376879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 21:56:53.710928  447211 retry.go:31] will retry after 311.863826ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "sudo umount -f /mount-9p": exit status 1 (270.490452ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-066896 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3281603967/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount1: exit status 1 (568.304112ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 21:56:55.631043  447211 retry.go:31] will retry after 357.745512ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-066896 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-066896 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2508289815/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)
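The cleanup path exercised above can be reproduced by hand: start several mounts in the background, verify one with findmnt, then let "mount --kill=true" tear down every mount process for the profile at once. A sketch, assuming the functional-066896 profile and a hypothetical host directory /tmp/src:

	out/minikube-linux-arm64 mount -p functional-066896 /tmp/src:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-arm64 mount -p functional-066896 /tmp/src:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-066896 ssh "findmnt -T" /mount1
	out/minikube-linux-arm64 mount -p functional-066896 --kill=true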

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-066896 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "349.734109ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.060117ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "324.487751ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.853187ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-066896
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-066896
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-066896
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (180s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 22:01:18.469490  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.333450  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.339804  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.351172  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.372544  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.414063  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.495599  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.657371  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:39.979137  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:40.621227  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:41.903252  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:44.464703  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:49.587076  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:01:59.828421  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:02:20.310697  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:03:01.272054  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:03:42.594781  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m58.746986125s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: (1.254068973s)
--- PASS: TestMultiControlPlane/serial/StartCluster (180.00s)

TestMultiControlPlane/serial/DeployApp (7.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 kubectl -- rollout status deployment/busybox: (4.303107276s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-2sh7s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-75sjv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-rbbr2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-2sh7s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-75sjv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-rbbr2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-2sh7s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-75sjv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-rbbr2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.07s)

TestMultiControlPlane/serial/PingHostFromPods (1.53s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-2sh7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-2sh7s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-75sjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-75sjv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-rbbr2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 kubectl -- exec busybox-7b57f96db7-rbbr2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

TestMultiControlPlane/serial/AddWorkerNode (59.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node add --alsologtostderr -v 5
E1202 22:04:23.194159  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 node add --alsologtostderr -v 5: (58.045309588s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: (1.089747366s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.14s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-204529 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.106507428s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (19.96s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 status --output json --alsologtostderr -v 5: (1.041655521s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp testdata/cp-test.txt ha-204529:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1384441364/001/cp-test_ha-204529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529:/home/docker/cp-test.txt ha-204529-m02:/home/docker/cp-test_ha-204529_ha-204529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test_ha-204529_ha-204529-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529:/home/docker/cp-test.txt ha-204529-m03:/home/docker/cp-test_ha-204529_ha-204529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test_ha-204529_ha-204529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529:/home/docker/cp-test.txt ha-204529-m04:/home/docker/cp-test_ha-204529_ha-204529-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test_ha-204529_ha-204529-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp testdata/cp-test.txt ha-204529-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1384441364/001/cp-test_ha-204529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m02:/home/docker/cp-test.txt ha-204529:/home/docker/cp-test_ha-204529-m02_ha-204529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test_ha-204529-m02_ha-204529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m02:/home/docker/cp-test.txt ha-204529-m03:/home/docker/cp-test_ha-204529-m02_ha-204529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test_ha-204529-m02_ha-204529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m02:/home/docker/cp-test.txt ha-204529-m04:/home/docker/cp-test_ha-204529-m02_ha-204529-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test_ha-204529-m02_ha-204529-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp testdata/cp-test.txt ha-204529-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1384441364/001/cp-test_ha-204529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m03:/home/docker/cp-test.txt ha-204529:/home/docker/cp-test_ha-204529-m03_ha-204529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test_ha-204529-m03_ha-204529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m03:/home/docker/cp-test.txt ha-204529-m02:/home/docker/cp-test_ha-204529-m03_ha-204529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test_ha-204529-m03_ha-204529-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m03:/home/docker/cp-test.txt ha-204529-m04:/home/docker/cp-test_ha-204529-m03_ha-204529-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test_ha-204529-m03_ha-204529-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp testdata/cp-test.txt ha-204529-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1384441364/001/cp-test_ha-204529-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m04:/home/docker/cp-test.txt ha-204529:/home/docker/cp-test_ha-204529-m04_ha-204529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test_ha-204529-m04_ha-204529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m04:/home/docker/cp-test.txt ha-204529-m02:/home/docker/cp-test_ha-204529-m04_ha-204529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test_ha-204529-m04_ha-204529-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 cp ha-204529-m04:/home/docker/cp-test.txt ha-204529-m03:/home/docker/cp-test_ha-204529-m04_ha-204529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m03 "sudo cat /home/docker/cp-test_ha-204529-m04_ha-204529-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.96s)
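
The CopyFile matrix above repeats one pattern per node pair: copy the fixture into a node, copy it across to another node, and cat both ends over ssh. A by-hand sketch for a single pair, using commands taken from the log (node names are from this run):

	out/minikube-linux-arm64 -p ha-204529 cp testdata/cp-test.txt ha-204529:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-204529 cp ha-204529:/home/docker/cp-test.txt ha-204529-m02:/home/docker/cp-test_ha-204529_ha-204529-m02.txt
	# verify both the source and the destination copy over ssh
	out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529-m02 "sudo cat /home/docker/cp-test_ha-204529_ha-204529-m02.txt"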

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 node stop m02 --alsologtostderr -v 5: (12.043673871s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: exit status 7 (828.516596ms)
-- stdout --
	ha-204529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-204529-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204529-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-204529-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1202 22:05:41.883237  524612 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:05:41.883397  524612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:05:41.883407  524612 out.go:374] Setting ErrFile to fd 2...
	I1202 22:05:41.883413  524612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:05:41.883666  524612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:05:41.883855  524612 out.go:368] Setting JSON to false
	I1202 22:05:41.883882  524612 mustload.go:66] Loading cluster: ha-204529
	I1202 22:05:41.884073  524612 notify.go:221] Checking for updates...
	I1202 22:05:41.884279  524612 config.go:182] Loaded profile config "ha-204529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:05:41.884298  524612 status.go:174] checking status of ha-204529 ...
	I1202 22:05:41.884786  524612 cli_runner.go:164] Run: docker container inspect ha-204529 --format={{.State.Status}}
	I1202 22:05:41.907860  524612 status.go:371] ha-204529 host status = "Running" (err=<nil>)
	I1202 22:05:41.907885  524612 host.go:66] Checking if "ha-204529" exists ...
	I1202 22:05:41.908191  524612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204529
	I1202 22:05:41.932663  524612 host.go:66] Checking if "ha-204529" exists ...
	I1202 22:05:41.932977  524612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:05:41.933024  524612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204529
	I1202 22:05:41.956507  524612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/ha-204529/id_rsa Username:docker}
	I1202 22:05:42.073747  524612 ssh_runner.go:195] Run: systemctl --version
	I1202 22:05:42.082110  524612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:05:42.108430  524612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:05:42.189809  524612 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-02 22:05:42.175349409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:05:42.190653  524612 kubeconfig.go:125] found "ha-204529" server: "https://192.168.49.254:8443"
	I1202 22:05:42.190694  524612 api_server.go:166] Checking apiserver status ...
	I1202 22:05:42.190758  524612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:05:42.205102  524612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	I1202 22:05:42.216318  524612 api_server.go:182] apiserver freezer: "7:freezer:/docker/083f72a657b8d26e3d36af533dc83efab8c9007d76c7d3f3ef56140e1d997097/crio/crio-ae3978c4eff0d4c438f77fdfbfc771561c9cc4066f1d1e7ef161f3a1ba32b029"
	I1202 22:05:42.216412  524612 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/083f72a657b8d26e3d36af533dc83efab8c9007d76c7d3f3ef56140e1d997097/crio/crio-ae3978c4eff0d4c438f77fdfbfc771561c9cc4066f1d1e7ef161f3a1ba32b029/freezer.state
	I1202 22:05:42.225493  524612 api_server.go:204] freezer state: "THAWED"
	I1202 22:05:42.225526  524612 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 22:05:42.237233  524612 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 22:05:42.237269  524612 status.go:463] ha-204529 apiserver status = Running (err=<nil>)
	I1202 22:05:42.237282  524612 status.go:176] ha-204529 status: &{Name:ha-204529 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:05:42.237310  524612 status.go:174] checking status of ha-204529-m02 ...
	I1202 22:05:42.237693  524612 cli_runner.go:164] Run: docker container inspect ha-204529-m02 --format={{.State.Status}}
	I1202 22:05:42.261619  524612 status.go:371] ha-204529-m02 host status = "Stopped" (err=<nil>)
	I1202 22:05:42.261649  524612 status.go:384] host is not running, skipping remaining checks
	I1202 22:05:42.261657  524612 status.go:176] ha-204529-m02 status: &{Name:ha-204529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:05:42.261677  524612 status.go:174] checking status of ha-204529-m03 ...
	I1202 22:05:42.262030  524612 cli_runner.go:164] Run: docker container inspect ha-204529-m03 --format={{.State.Status}}
	I1202 22:05:42.280488  524612 status.go:371] ha-204529-m03 host status = "Running" (err=<nil>)
	I1202 22:05:42.280517  524612 host.go:66] Checking if "ha-204529-m03" exists ...
	I1202 22:05:42.280865  524612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204529-m03
	I1202 22:05:42.300681  524612 host.go:66] Checking if "ha-204529-m03" exists ...
	I1202 22:05:42.301166  524612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:05:42.301220  524612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204529-m03
	I1202 22:05:42.321506  524612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/ha-204529-m03/id_rsa Username:docker}
	I1202 22:05:42.424660  524612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:05:42.439428  524612 kubeconfig.go:125] found "ha-204529" server: "https://192.168.49.254:8443"
	I1202 22:05:42.439462  524612 api_server.go:166] Checking apiserver status ...
	I1202 22:05:42.439505  524612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:05:42.451434  524612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1216/cgroup
	I1202 22:05:42.461502  524612 api_server.go:182] apiserver freezer: "7:freezer:/docker/e410f5762923968c1544b0e48b2361e1dc03036e0f4f93917b618e8db0ab523c/crio/crio-e1be77e366c597d2ef3cd24762cabccaea04466af6e1426224fba819d58fa476"
	I1202 22:05:42.461588  524612 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e410f5762923968c1544b0e48b2361e1dc03036e0f4f93917b618e8db0ab523c/crio/crio-e1be77e366c597d2ef3cd24762cabccaea04466af6e1426224fba819d58fa476/freezer.state
	I1202 22:05:42.470014  524612 api_server.go:204] freezer state: "THAWED"
	I1202 22:05:42.470043  524612 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 22:05:42.478390  524612 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 22:05:42.478419  524612 status.go:463] ha-204529-m03 apiserver status = Running (err=<nil>)
	I1202 22:05:42.478428  524612 status.go:176] ha-204529-m03 status: &{Name:ha-204529-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:05:42.478444  524612 status.go:174] checking status of ha-204529-m04 ...
	I1202 22:05:42.478763  524612 cli_runner.go:164] Run: docker container inspect ha-204529-m04 --format={{.State.Status}}
	I1202 22:05:42.496566  524612 status.go:371] ha-204529-m04 host status = "Running" (err=<nil>)
	I1202 22:05:42.496595  524612 host.go:66] Checking if "ha-204529-m04" exists ...
	I1202 22:05:42.496901  524612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204529-m04
	I1202 22:05:42.515477  524612 host.go:66] Checking if "ha-204529-m04" exists ...
	I1202 22:05:42.515790  524612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:05:42.515833  524612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204529-m04
	I1202 22:05:42.534737  524612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/ha-204529-m04/id_rsa Username:docker}
	I1202 22:05:42.643573  524612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:05:42.657597  524612 status.go:176] ha-204529-m04 status: &{Name:ha-204529-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
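
The stderr trace above shows how `minikube status` derives the apiserver state: find the kube-apiserver PID, map it to its freezer cgroup, confirm the cgroup is THAWED, then probe /healthz on the HA VIP. A rough by-hand replay (a sketch only; the PID and cgroup path differ per run, and curl -k stands in for the test binary's Go HTTP client):

	PID=$(out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo pgrep -xnf kube-apiserver.*minikube.*")
	out/minikube-linux-arm64 -p ha-204529 ssh -n ha-204529 "sudo egrep ^[0-9]+:freezer: /proc/$PID/cgroup"
	# the freezer.state file under the cgroup printed above should read THAWED
	curl -k https://192.168.49.254:8443/healthz   # expect 200 / "ok"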

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.08s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node start m02 --alsologtostderr -v 5
E1202 22:06:01.544848  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 node start m02 --alsologtostderr -v 5: (26.763134082s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: (1.196983105s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.032187384s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 stop --alsologtostderr -v 5
E1202 22:06:18.470057  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:06:39.333551  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 stop --alsologtostderr -v 5: (27.52976755s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 start --wait true --alsologtostderr -v 5
E1202 22:06:45.669110  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:07:07.039127  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 start --wait true --alsologtostderr -v 5: (2m1.617101218s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.32s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node delete m03 --alsologtostderr -v 5
E1202 22:08:42.594495  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 node delete m03 --alsologtostderr -v 5: (11.049524235s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 stop --alsologtostderr -v 5: (36.038223418s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: exit status 7 (108.14826ms)
-- stdout --
	ha-204529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204529-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204529-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1202 22:09:30.815496  536602 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:09:30.815673  536602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:09:30.815703  536602 out.go:374] Setting ErrFile to fd 2...
	I1202 22:09:30.815723  536602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:09:30.815997  536602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:09:30.816212  536602 out.go:368] Setting JSON to false
	I1202 22:09:30.816279  536602 mustload.go:66] Loading cluster: ha-204529
	I1202 22:09:30.816351  536602 notify.go:221] Checking for updates...
	I1202 22:09:30.817633  536602 config.go:182] Loaded profile config "ha-204529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:09:30.817685  536602 status.go:174] checking status of ha-204529 ...
	I1202 22:09:30.818454  536602 cli_runner.go:164] Run: docker container inspect ha-204529 --format={{.State.Status}}
	I1202 22:09:30.837953  536602 status.go:371] ha-204529 host status = "Stopped" (err=<nil>)
	I1202 22:09:30.837976  536602 status.go:384] host is not running, skipping remaining checks
	I1202 22:09:30.837983  536602 status.go:176] ha-204529 status: &{Name:ha-204529 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:09:30.838026  536602 status.go:174] checking status of ha-204529-m02 ...
	I1202 22:09:30.838339  536602 cli_runner.go:164] Run: docker container inspect ha-204529-m02 --format={{.State.Status}}
	I1202 22:09:30.855841  536602 status.go:371] ha-204529-m02 host status = "Stopped" (err=<nil>)
	I1202 22:09:30.855866  536602 status.go:384] host is not running, skipping remaining checks
	I1202 22:09:30.855880  536602 status.go:176] ha-204529-m02 status: &{Name:ha-204529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:09:30.855908  536602 status.go:174] checking status of ha-204529-m04 ...
	I1202 22:09:30.856264  536602 cli_runner.go:164] Run: docker container inspect ha-204529-m04 --format={{.State.Status}}
	I1202 22:09:30.875983  536602 status.go:371] ha-204529-m04 host status = "Stopped" (err=<nil>)
	I1202 22:09:30.876009  536602 status.go:384] host is not running, skipping remaining checks
	I1202 22:09:30.876015  536602 status.go:176] ha-204529-m04 status: &{Name:ha-204529-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)

TestMultiControlPlane/serial/RestartCluster (84.03s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m23.030532023s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.03s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (84.8s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 node add --control-plane --alsologtostderr -v 5
E1202 22:11:18.473252  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:11:39.333999  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 node add --control-plane --alsologtostderr -v 5: (1m23.7620964s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-204529 status --alsologtostderr -v 5: (1.035808264s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.096043501s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (80.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-361713 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1202 22:13:42.596229  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-361713 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.345789287s)
--- PASS: TestJSONOutput/start/Command (80.35s)
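
With --output=json, every step of the start is emitted as a CloudEvents-style JSON line (the TestErrorJSONOutput transcript below shows the exact shape, including data.currentstep, data.totalsteps, and data.message). A sketch of consuming the stream, assuming jq is available on the host:

	out/minikube-linux-arm64 start -p json-output-361713 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'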

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-361713 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-361713 --output=json --user=testUser: (6.135291391s)
--- PASS: TestJSONOutput/stop/Command (6.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-180175 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-180175 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.191999ms)
-- stdout --
	{"specversion":"1.0","id":"cb2a878e-419a-48db-a51d-41935db9227c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-180175] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4dc7b8e-08cd-4275-8c94-34477bc78059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"2c583785-23a3-4d9c-98e0-b90a01d85d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"25df733d-f091-4330-8907-dc81ddbeebe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig"}}
	{"specversion":"1.0","id":"68c4bff7-98fa-47ee-bc0f-34044d2f409e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube"}}
	{"specversion":"1.0","id":"5cd5fdda-885c-4aee-99bb-acfd4377f759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e378b1ef-b23c-4ee9-85e9-7851d783cf58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"afcb2fe5-ed3a-48d7-9358-1f44fdf41a20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-180175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-180175
--- PASS: TestErrorJSONOutput (0.24s)
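
The error path mirrors the step events: a single io.k8s.sigs.minikube.error event carries the error name and exit code, and the process itself exits non-zero. A sketch of reproducing it by hand, using the same bogus driver:

	out/minikube-linux-arm64 start -p json-output-error-180175 --memory=3072 --output=json --wait=true --driver=fail
	echo $?   # 56, matching the DRV_UNSUPPORTED_OS event above
	out/minikube-linux-arm64 delete -p json-output-error-180175   # clean up the profile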

                                                
                                    
TestKicCustomNetwork/create_custom_network (60.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-158704 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-158704 --network=: (58.662604217s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-158704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-158704
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-158704: (2.224391009s)
--- PASS: TestKicCustomNetwork/create_custom_network (60.91s)

TestKicCustomNetwork/use_default_bridge_network (35.59s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-608363 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-608363 --network=bridge: (33.485268731s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-608363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-608363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-608363: (2.075281594s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.59s)

TestKicExistingNetwork (33.71s)

=== RUN   TestKicExistingNetwork
I1202 22:15:42.326458  447211 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 22:15:42.343609  447211 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 22:15:42.344494  447211 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 22:15:42.344535  447211 cli_runner.go:164] Run: docker network inspect existing-network
W1202 22:15:42.361151  447211 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 22:15:42.361181  447211 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1202 22:15:42.361198  447211 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1202 22:15:42.361311  447211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 22:15:42.378260  447211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-06d3c27080bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:1c:13:62:02:15} reservation:<nil>}
I1202 22:15:42.378595  447211 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001909020}
I1202 22:15:42.378622  447211 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 22:15:42.378676  447211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 22:15:42.432440  447211 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-476176 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-476176 --network=existing-network: (31.433723449s)
helpers_test.go:175: Cleaning up "existing-network-476176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-476176
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-476176: (2.130158139s)
I1202 22:16:16.013874  447211 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.71s)
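
The trace above shows the setup: the test pre-creates `existing-network` with the same bridge options and labels minikube itself applies, then asserts that `start --network=existing-network` attaches to it rather than allocating a fresh subnet. The equivalent commands, taken from the log:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-arm64 start -p existing-network-476176 --network=existing-network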

                                                
                                    
TestKicCustomSubnet (36.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-939979 --subnet=192.168.60.0/24
E1202 22:16:18.470530  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:39.333863  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-939979 --subnet=192.168.60.0/24: (34.067488343s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-939979 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-939979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-939979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-939979: (2.254050339s)
--- PASS: TestKicCustomSubnet (36.35s)

TestKicStaticIP (34.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-872182 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-872182 --static-ip=192.168.200.200: (32.318428186s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-872182 ip
helpers_test.go:175: Cleaning up "static-ip-872182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-872182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-872182: (2.300981193s)
--- PASS: TestKicStaticIP (34.78s)
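
--static-ip places the node on a dedicated docker network around the requested address, and the assertion boils down to comparing `minikube ip` against the flag. A sketch (the network being named after the profile is an assumption carried over from the custom-subnet test above):

	out/minikube-linux-arm64 start -p static-ip-872182 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-872182 ip   # expect 192.168.200.200
	docker network inspect static-ip-872182 --format "{{(index .IPAM.Config 0).Subnet}}"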

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-908114 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-908114 --driver=docker  --container-runtime=crio: (31.205713035s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-910583 --driver=docker  --container-runtime=crio
E1202 22:18:02.402804  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-910583 --driver=docker  --container-runtime=crio: (32.73827041s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-908114
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-910583
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-910583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-910583
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-910583: (2.431720481s)
helpers_test.go:175: Cleaning up "first-908114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-908114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-908114: (2.072026939s)
--- PASS: TestMinikubeProfile (69.89s)

TestMountStart/serial/StartWithMountFirst (8.9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-458413 --memory=3072 --mount-string /tmp/TestMountStartserial3666499954/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1202 22:18:42.595137  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-458413 --memory=3072 --mount-string /tmp/TestMountStartserial3666499954/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.899465121s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.90s)
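The start line above maps a host directory into the guest at boot via the --mount-* flags; condensed to its shape (paths illustrative, flag values copied from the run above):

    minikube start -p mount-demo --no-kubernetes \
        --mount-string /tmp/hostdir:/minikube-host \
        --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543 \
        --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host   # the VerifyMount* steps below do exactly this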

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-458413 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-460350 --memory=3072 --mount-string /tmp/TestMountStartserial3666499954/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-460350 --memory=3072 --mount-string /tmp/TestMountStartserial3666499954/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.603456448s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.60s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-460350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-458413 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-458413 --alsologtostderr -v=5: (1.700665236s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-460350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-460350
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-460350: (1.300798394s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-460350
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-460350: (7.001809079s)
--- PASS: TestMountStart/serial/RestartStopped (8.00s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-460350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (141.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-313323 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 22:21:18.469738  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-313323 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m21.009620808s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.58s)

TestMultiNode/serial/DeployApp2Nodes (4.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-313323 -- rollout status deployment/busybox: (3.086830079s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-bg95c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-x8mkn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-bg95c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-x8mkn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-bg95c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-x8mkn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.84s)
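The subtest above boils down to: deploy busybox across the nodes, wait for the rollout, then resolve cluster DNS names from each pod. Condensed (same commands as the run, looped over whatever pods come back):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
        kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done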

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-bg95c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-bg95c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-x8mkn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-313323 -- exec busybox-7b57f96db7-x8mkn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
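The `nslookup | awk | cut` pipeline above leans on the layout of busybox's nslookup output: the resolved answer is assumed to land on line 5 as `Address 1: <ip> <name>`, so `awk 'NR==5'` keeps that line and `cut -d' ' -f3` keeps the address field. Inside a pod it reduces to:

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"   # the docker-network gateway (192.168.67.1 here) should reply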

TestMultiNode/serial/AddNode (57.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-313323 -v=5 --alsologtostderr
E1202 22:21:39.333592  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-313323 -v=5 --alsologtostderr: (56.906252887s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.62s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-313323 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp testdata/cp-test.txt multinode-313323:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2110769480/001/cp-test_multinode-313323.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323:/home/docker/cp-test.txt multinode-313323-m02:/home/docker/cp-test_multinode-313323_multinode-313323-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test_multinode-313323_multinode-313323-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323:/home/docker/cp-test.txt multinode-313323-m03:/home/docker/cp-test_multinode-313323_multinode-313323-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test_multinode-313323_multinode-313323-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp testdata/cp-test.txt multinode-313323-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2110769480/001/cp-test_multinode-313323-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m02:/home/docker/cp-test.txt multinode-313323:/home/docker/cp-test_multinode-313323-m02_multinode-313323.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test_multinode-313323-m02_multinode-313323.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m02:/home/docker/cp-test.txt multinode-313323-m03:/home/docker/cp-test_multinode-313323-m02_multinode-313323-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test_multinode-313323-m02_multinode-313323-m03.txt"
E1202 22:22:41.546438  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp testdata/cp-test.txt multinode-313323-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2110769480/001/cp-test_multinode-313323-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m03:/home/docker/cp-test.txt multinode-313323:/home/docker/cp-test_multinode-313323-m03_multinode-313323.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323 "sudo cat /home/docker/cp-test_multinode-313323-m03_multinode-313323.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 cp multinode-313323-m03:/home/docker/cp-test.txt multinode-313323-m02:/home/docker/cp-test_multinode-313323-m03_multinode-313323-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 ssh -n multinode-313323-m02 "sudo cat /home/docker/cp-test_multinode-313323-m03_multinode-313323-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.53s)
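The copy matrix above exercises all three directions `minikube cp` supports, each verified with `ssh -n <node> sudo cat`. Condensed (profile and node names illustrative):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt              # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                  # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt # node -> node
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"                # verify contents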

TestMultiNode/serial/StopNode (2.5s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-313323 node stop m03: (1.404943345s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-313323 status: exit status 7 (555.262878ms)

-- stdout --
	multinode-313323
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313323-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313323-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr: exit status 7 (534.782648ms)

-- stdout --
	multinode-313323
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313323-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313323-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1202 22:22:46.923357  586962 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:22:46.923493  586962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:22:46.923506  586962 out.go:374] Setting ErrFile to fd 2...
	I1202 22:22:46.923513  586962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:22:46.923893  586962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:22:46.924134  586962 out.go:368] Setting JSON to false
	I1202 22:22:46.924169  586962 mustload.go:66] Loading cluster: multinode-313323
	I1202 22:22:46.924888  586962 config.go:182] Loaded profile config "multinode-313323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:22:46.924907  586962 status.go:174] checking status of multinode-313323 ...
	I1202 22:22:46.925679  586962 cli_runner.go:164] Run: docker container inspect multinode-313323 --format={{.State.Status}}
	I1202 22:22:46.926684  586962 notify.go:221] Checking for updates...
	I1202 22:22:46.949159  586962 status.go:371] multinode-313323 host status = "Running" (err=<nil>)
	I1202 22:22:46.949181  586962 host.go:66] Checking if "multinode-313323" exists ...
	I1202 22:22:46.949488  586962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-313323
	I1202 22:22:46.968779  586962 host.go:66] Checking if "multinode-313323" exists ...
	I1202 22:22:46.969123  586962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:22:46.969169  586962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-313323
	I1202 22:22:46.992642  586962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/multinode-313323/id_rsa Username:docker}
	I1202 22:22:47.096388  586962 ssh_runner.go:195] Run: systemctl --version
	I1202 22:22:47.102937  586962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:22:47.115534  586962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:22:47.178859  586962 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 22:22:47.16990954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:22:47.179440  586962 kubeconfig.go:125] found "multinode-313323" server: "https://192.168.67.2:8443"
	I1202 22:22:47.179479  586962 api_server.go:166] Checking apiserver status ...
	I1202 22:22:47.179534  586962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:22:47.191146  586962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1202 22:22:47.199342  586962 api_server.go:182] apiserver freezer: "7:freezer:/docker/9071098e495818df71a5c8b749fa99365967eb512bb76de9a62b4450b1334ffa/crio/crio-6336e0c67599913b1bc10f43f13b84ae31ae9882fa80beeb3db08f87f76e641e"
	I1202 22:22:47.199424  586962 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9071098e495818df71a5c8b749fa99365967eb512bb76de9a62b4450b1334ffa/crio/crio-6336e0c67599913b1bc10f43f13b84ae31ae9882fa80beeb3db08f87f76e641e/freezer.state
	I1202 22:22:47.207392  586962 api_server.go:204] freezer state: "THAWED"
	I1202 22:22:47.207419  586962 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 22:22:47.215587  586962 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 22:22:47.215615  586962 status.go:463] multinode-313323 apiserver status = Running (err=<nil>)
	I1202 22:22:47.215626  586962 status.go:176] multinode-313323 status: &{Name:multinode-313323 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:22:47.215643  586962 status.go:174] checking status of multinode-313323-m02 ...
	I1202 22:22:47.215956  586962 cli_runner.go:164] Run: docker container inspect multinode-313323-m02 --format={{.State.Status}}
	I1202 22:22:47.232709  586962 status.go:371] multinode-313323-m02 host status = "Running" (err=<nil>)
	I1202 22:22:47.232743  586962 host.go:66] Checking if "multinode-313323-m02" exists ...
	I1202 22:22:47.233047  586962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-313323-m02
	I1202 22:22:47.251922  586962 host.go:66] Checking if "multinode-313323-m02" exists ...
	I1202 22:22:47.252345  586962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:22:47.252397  586962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-313323-m02
	I1202 22:22:47.270080  586962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/21997-444114/.minikube/machines/multinode-313323-m02/id_rsa Username:docker}
	I1202 22:22:47.372316  586962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:22:47.384954  586962 status.go:176] multinode-313323-m02 status: &{Name:multinode-313323-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:22:47.384987  586962 status.go:174] checking status of multinode-313323-m03 ...
	I1202 22:22:47.385302  586962 cli_runner.go:164] Run: docker container inspect multinode-313323-m03 --format={{.State.Status}}
	I1202 22:22:47.403035  586962 status.go:371] multinode-313323-m03 host status = "Stopped" (err=<nil>)
	I1202 22:22:47.403062  586962 status.go:384] host is not running, skipping remaining checks
	I1202 22:22:47.403069  586962 status.go:176] multinode-313323-m03 status: &{Name:multinode-313323-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)
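The stderr above shows how `status` decides `apiserver: Running`: it finds the kube-apiserver pid, checks the process's freezer cgroup is THAWED (i.e. not paused), then probes /healthz. Roughly, from inside the node (addresses and cgroup paths vary per run):

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup    # locate the freezer cgroup
    # cat <cgroup>/freezer.state and expect THAWED, then:
    curl -ks https://192.168.67.2:8443/healthz         # expect HTTP 200 with body "ok"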

TestMultiNode/serial/StartAfterStop (8.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-313323 node start m03 -v=5 --alsologtostderr: (7.449287144s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.27s)

TestMultiNode/serial/RestartKeepsNodes (79.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-313323
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-313323
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-313323: (25.003142788s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-313323 --wait=true -v=5 --alsologtostderr
E1202 22:23:25.670873  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:23:42.594725  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-313323 --wait=true -v=5 --alsologtostderr: (54.868740494s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-313323
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.99s)

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-313323 node delete m03: (4.923383047s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

TestMultiNode/serial/StopMultiNode (24.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-313323 stop: (23.802076176s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-313323 status: exit status 7 (127.039641ms)

-- stdout --
	multinode-313323
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313323-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr: exit status 7 (101.621452ms)

-- stdout --
	multinode-313323
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313323-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1202 22:24:45.296766  594798 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:24:45.296934  594798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:24:45.296943  594798 out.go:374] Setting ErrFile to fd 2...
	I1202 22:24:45.296949  594798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:24:45.297193  594798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:24:45.297378  594798 out.go:368] Setting JSON to false
	I1202 22:24:45.297410  594798 mustload.go:66] Loading cluster: multinode-313323
	I1202 22:24:45.297461  594798 notify.go:221] Checking for updates...
	I1202 22:24:45.297856  594798 config.go:182] Loaded profile config "multinode-313323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:24:45.297874  594798 status.go:174] checking status of multinode-313323 ...
	I1202 22:24:45.298679  594798 cli_runner.go:164] Run: docker container inspect multinode-313323 --format={{.State.Status}}
	I1202 22:24:45.323180  594798 status.go:371] multinode-313323 host status = "Stopped" (err=<nil>)
	I1202 22:24:45.323200  594798 status.go:384] host is not running, skipping remaining checks
	I1202 22:24:45.323215  594798 status.go:176] multinode-313323 status: &{Name:multinode-313323 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 22:24:45.323241  594798 status.go:174] checking status of multinode-313323-m02 ...
	I1202 22:24:45.323547  594798 cli_runner.go:164] Run: docker container inspect multinode-313323-m02 --format={{.State.Status}}
	I1202 22:24:45.345396  594798 status.go:371] multinode-313323-m02 host status = "Stopped" (err=<nil>)
	I1202 22:24:45.345416  594798 status.go:384] host is not running, skipping remaining checks
	I1202 22:24:45.345433  594798 status.go:176] multinode-313323-m02 status: &{Name:multinode-313323-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)
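`stop` without a node argument halts every machine in the profile, and `status` then reports through its exit code rather than failing outright; exit status 7, as both runs above show, means stopped hosts. Condensed (profile name illustrative):

    minikube -p demo stop
    minikube -p demo status; echo "exit=$?"   # expect 7 once every node is stopped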

TestMultiNode/serial/RestartMultiNode (58.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-313323 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-313323 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.336682516s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-313323 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.06s)

TestMultiNode/serial/ValidateNameConflict (36.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-313323
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-313323-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-313323-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.622489ms)

-- stdout --
	* [multinode-313323-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-313323-m02' is duplicated with machine name 'multinode-313323-m02' in profile 'multinode-313323'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-313323-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-313323-m03 --driver=docker  --container-runtime=crio: (34.369376999s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-313323
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-313323: exit status 80 (376.256877ms)

-- stdout --
	* Adding node m03 to cluster multinode-313323 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-313323-m03 already exists in multinode-313323-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-313323-m03
E1202 22:26:18.469310  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-313323-m03: (2.071183505s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.97s)
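Both non-zero exits above are name-collision guards: a new profile may not reuse a machine name belonging to an existing multi-node profile, and `node add` refuses a node name already taken by a standalone profile. Condensed (profile names illustrative, exit codes from the run above):

    minikube start -p demo --nodes=2   # creates machines demo and demo-m02
    minikube start -p demo-m02         # exit 14 (MK_USAGE): duplicate name
    minikube start -p demo-m03         # standalone profile named like demo's next node
    minikube node add -p demo          # exit 80 (GUEST_NODE_ADD): demo-m03 already exists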

TestPreload (117.28s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-376221 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1202 22:26:39.334213  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-376221 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.450355239s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-376221 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-376221 image pull gcr.io/k8s-minikube/busybox: (2.292170431s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-376221
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-376221: (5.862943775s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-376221 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-376221 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.985980067s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-376221 image list
helpers_test.go:175: Cleaning up "test-preload-376221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-376221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-376221: (2.439233595s)
--- PASS: TestPreload (117.28s)
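The preload flow above checks that an image pulled into a cluster started with --preload=false survives a stop and a restart with --preload=true. Condensed (profile name illustrative):

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true
    minikube -p preload-demo image list | grep busybox   # the pulled image should still be listed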

TestScheduledStopUnix (107.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-977495 --memory=3072 --driver=docker  --container-runtime=crio
E1202 22:28:42.595630  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-977495 --memory=3072 --driver=docker  --container-runtime=crio: (31.159028976s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-977495 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1202 22:28:53.169853  609312 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:53.170044  609312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:53.170058  609312 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:53.170063  609312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:53.170455  609312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:28:53.170805  609312 out.go:368] Setting JSON to false
	I1202 22:28:53.170964  609312 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:28:53.172109  609312 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:28:53.172224  609312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/config.json ...
	I1202 22:28:53.172470  609312 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:28:53.172651  609312 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-977495 -n scheduled-stop-977495
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-977495 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1202 22:28:53.631672  609404 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:53.631797  609404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:53.631806  609404 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:53.631812  609404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:53.632127  609404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:28:53.632386  609404 out.go:368] Setting JSON to false
	I1202 22:28:53.633405  609404 daemonize_unix.go:73] killing process 609327 as it is an old scheduled stop
	I1202 22:28:53.633594  609404 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:28:53.634468  609404 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:28:53.634556  609404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/config.json ...
	I1202 22:28:53.634734  609404 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:28:53.634868  609404 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 22:28:53.649126  447211 retry.go:31] will retry after 92.8µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.649327  447211 retry.go:31] will retry after 157.724µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.649832  447211 retry.go:31] will retry after 311.246µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.650418  447211 retry.go:31] will retry after 413.113µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.651506  447211 retry.go:31] will retry after 321.861µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.652600  447211 retry.go:31] will retry after 577.408µs: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.653729  447211 retry.go:31] will retry after 1.126173ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.655919  447211 retry.go:31] will retry after 2.238191ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.659165  447211 retry.go:31] will retry after 3.322884ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.663404  447211 retry.go:31] will retry after 2.609847ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.666626  447211 retry.go:31] will retry after 4.342799ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.671843  447211 retry.go:31] will retry after 4.75754ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.677066  447211 retry.go:31] will retry after 9.577426ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.687306  447211 retry.go:31] will retry after 10.504714ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.698789  447211 retry.go:31] will retry after 20.242232ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
I1202 22:28:53.719155  447211 retry.go:31] will retry after 50.168211ms: open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-977495 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-977495 -n scheduled-stop-977495
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-977495
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-977495 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1202 22:29:19.568382  609835 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:29:19.568569  609835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:29:19.568603  609835 out.go:374] Setting ErrFile to fd 2...
	I1202 22:29:19.568632  609835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:29:19.568964  609835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-444114/.minikube/bin
	I1202 22:29:19.569284  609835 out.go:368] Setting JSON to false
	I1202 22:29:19.569450  609835 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:29:19.569856  609835 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 22:29:19.569950  609835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/scheduled-stop-977495/config.json ...
	I1202 22:29:19.570198  609835 mustload.go:66] Loading cluster: scheduled-stop-977495
	I1202 22:29:19.570361  609835 config.go:182] Loaded profile config "scheduled-stop-977495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-977495
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-977495: exit status 7 (72.726982ms)

-- stdout --
	scheduled-stop-977495
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-977495 -n scheduled-stop-977495
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-977495 -n scheduled-stop-977495: exit status 7 (71.11834ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-977495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-977495
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-977495: (5.021590391s)
--- PASS: TestScheduledStopUnix (107.78s)
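The run above covers the whole scheduled-stop lifecycle: --schedule forks a background process that stops the cluster after the delay (re-scheduling kills the previous one, per the "killing process ... old scheduled stop" line), and --cancel-scheduled disarms any pending stop. Condensed (profile name illustrative):

    minikube stop -p demo --schedule 5m                  # arm a stop five minutes out
    minikube status -p demo --format '{{.TimeToStop}}'   # inspect the pending stop
    minikube stop -p demo --cancel-scheduled             # disarm it
    minikube stop -p demo --schedule 15s; sleep 20       # let one fire; status then exits 7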

TestInsufficientStorage (12.61s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-398349 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-398349 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.014394261s)

-- stdout --
	{"specversion":"1.0","id":"a3f03c3a-1328-47a7-b56a-9ac1bac541ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-398349] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74116afc-f3b5-4387-b363-a705b9780e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"12e4c73d-b479-44d8-a678-860273f0e1f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"575f22e7-8c41-4cd2-a344-310bdbcbfae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig"}}
	{"specversion":"1.0","id":"576b39db-648c-4247-9b0e-6776d3c6571f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube"}}
	{"specversion":"1.0","id":"75a06b44-5581-4030-abc0-466bb0abcc99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c01a7c53-9c9c-4922-b186-79153d47b1f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2220c9cc-fe20-4b48-b0b7-70b194d6be51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e104f6b5-72ed-4b8e-90b5-2ab8f7c5cd07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4e126c25-9e88-42a1-9c9e-c339bc80fdda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"def7bae1-71e6-423d-8dcd-a3aa1563bc08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"279e2346-4cb6-45c0-806f-f5ae3029a67b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-398349\" primary control-plane node in \"insufficient-storage-398349\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bb3b47a-b9de-4ae7-98d7-4d0aae663253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"29350581-e99f-408f-b7c7-c9753bae54f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe54e20c-14a4-4dd7-aa64-3a53582403c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
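The start events above are newline-delimited JSON in a CloudEvents-style envelope. Below is a minimal Go consumer sketch (not part of the test run), assuming only the fields visible in this output (the event type and the data payload); all names in it are illustrative:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// startEvent mirrors the envelope seen in the stdout above.
	type startEvent struct {
		Type string `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data struct {
			CurrentStep string `json:"currentstep"`
			TotalSteps  string `json:"totalsteps"`
			Name        string `json:"name"`
			Message     string `json:"message"`
			ExitCode    string `json:"exitcode"` // only set on error events
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprog
		for sc.Scan() {
			var ev startEvent
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}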
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-398349 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-398349 --output=json --layout=cluster: exit status 7 (324.334697ms)

-- stdout --
	{"Name":"insufficient-storage-398349","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398349","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1202 22:30:20.066115  611722 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398349" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-398349 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-398349 --output=json --layout=cluster: exit status 7 (294.815099ms)

-- stdout --
	{"Name":"insufficient-storage-398349","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398349","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1202 22:30:20.362629  611788 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398349" does not appear in /home/jenkins/minikube-integration/21997-444114/kubeconfig
	E1202 22:30:20.373643  611788 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/insufficient-storage-398349/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-398349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-398349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-398349: (1.978045624s)
--- PASS: TestInsufficientStorage (12.61s)
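The cluster status above is plain JSON, so the 507/InsufficientStorage condition can be detected programmatically. A minimal sketch, assuming the field shape shown in the stdout blocks (Name, StatusCode, StatusName, Nodes); the command itself exits non-zero for a degraded cluster, so the sketch parses stdout regardless of the exit error:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterStatus matches the --layout=cluster JSON above; the keys are
	// capitalized exactly like the field names, so no struct tags are needed.
	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		// Exit status 7 is expected for a degraded cluster; Output still
		// returns whatever JSON was printed before the non-zero exit.
		out, _ := exec.Command("minikube", "status",
			"-p", "insufficient-storage-398349",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("no parsable status:", err)
			return
		}
		if st.StatusCode == 507 { // InsufficientStorage, as in the run above
			fmt.Printf("%s: %s\n", st.Name, st.StatusName)
		}
	}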

TestRunningBinaryUpgrade (300.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3641793158 start -p running-upgrade-873899 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3641793158 start -p running-upgrade-873899 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.543924649s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-873899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 22:38:42.595165  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:39:21.549570  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:05.672270  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:41:18.469903  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:41:39.333872  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-873899 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.114709846s)
helpers_test.go:175: Cleaning up "running-upgrade-873899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-873899
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-873899: (1.991476179s)
--- PASS: TestRunningBinaryUpgrade (300.63s)

TestMissingContainerUpgrade (141.71s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.730011834 start -p missing-upgrade-825984 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.730011834 start -p missing-upgrade-825984 --memory=3072 --driver=docker  --container-runtime=crio: (1m16.687176083s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-825984
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-825984
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-825984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-825984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.093501126s)
helpers_test.go:175: Cleaning up "missing-upgrade-825984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-825984
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-825984: (2.305078322s)
--- PASS: TestMissingContainerUpgrade (141.71s)
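For reference, the recovery flow this test exercises can be reproduced by hand with the same commands logged above: create a cluster with an older release, remove its node container out from under it, then let the current binary repair the profile. A sketch under those assumptions (the /tmp binary name is specific to this run and will differ elsewhere):

	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		old := "/tmp/minikube-v1.35.0.730011834" // released binary from this run
		run(old, "start", "-p", "missing-upgrade-825984", "--memory=3072",
			"--driver=docker", "--container-runtime=crio")
		run("docker", "stop", "missing-upgrade-825984") // simulate a lost node
		run("docker", "rm", "missing-upgrade-825984")
		run("out/minikube-linux-arm64", "start", "-p", "missing-upgrade-825984",
			"--memory=3072", "--driver=docker", "--container-runtime=crio")
	}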

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (103.059796ms)

-- stdout --
	* [NoKubernetes-245878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-444114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-444114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (42.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.954891241s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-245878 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.34s)

TestNoKubernetes/serial/StartWithStopK8s (6.7s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.346573822s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-245878 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-245878 status -o json: exit status 2 (318.470216ms)

-- stdout --
	{"Name":"NoKubernetes-245878","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-245878
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-245878: (2.033574709s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.70s)
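The default status layout (without --layout=cluster) is the flat object shown above, and the CLI exits 2 when the host runs without Kubernetes. A hedged sketch of reading it; the JSON keys match the capitalized field names, so no struct tags are needed:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Exit status 2 is expected for a running host with kubelet stopped;
		// the JSON is still written to stdout.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-245878",
			"status", "-o", "json").Output()
		var st profileStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("parse:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}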

TestNoKubernetes/serial/Start (9.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1202 22:31:18.470085  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.542422488s)
--- PASS: TestNoKubernetes/serial/Start (9.54s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-444114/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-245878 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-245878 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.103696ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
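systemd's is-active returns exit code 3 for an inactive unit, which minikube ssh surfaces as the "Process exited with status 3" line above while itself exiting 1; the test only requires a non-zero exit. A sketch of the same probe, assuming the profile name from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-245878",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
		case errors.As(err, &exitErr):
			// Non-zero exit: the remote systemctl reported the unit inactive.
			fmt.Println("kubelet not running; minikube ssh exit:", exitErr.ExitCode())
		default:
			fmt.Println("ssh failed:", err)
		}
	}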

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-245878
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-245878: (1.314961858s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245878 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245878 --driver=docker  --container-runtime=crio: (8.386746838s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-245878 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-245878 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.875693ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestStoppedBinaryUpgrade/Setup (11.09s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (11.09s)

TestStoppedBinaryUpgrade/Upgrade (298.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1037207830 start -p stopped-upgrade-013069 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1037207830 start -p stopped-upgrade-013069 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.98196922s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1037207830 -p stopped-upgrade-013069 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1037207830 -p stopped-upgrade-013069 stop: (1.240456909s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-013069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 22:33:42.594641  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:34:42.405006  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:36:18.469508  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/addons-656754/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:36:39.333894  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-066896/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-013069 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.832531112s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (298.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.6s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-013069
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-013069: (1.603752528s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.60s)

TestPause/serial/Start (84.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-618835 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1202 22:43:42.596084  447211 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-444114/.minikube/profiles/functional-218190/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-618835 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.193061732s)
--- PASS: TestPause/serial/Start (84.19s)

TestPause/serial/SecondStartNoReconfiguration (24.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-618835 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-618835 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.178186626s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.20s)


Test skip (35/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.15
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.55
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 21:08:51.047948  447211 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1202 21:08:51.152763  447211 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
W1202 21:08:51.199308  447211 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)
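The preload probe logged above tries a GCS bucket first and a GitHub release second, skipping the preload when both return 404. A hypothetical re-creation of the same check (minikube's actual request method may differ):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		urls := []string{
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4",
			"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4",
		}
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				fmt.Println("probe failed:", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("preload available:", u)
				return
			}
			fmt.Println(resp.StatusCode, u) // 404 for both in the run above
		}
		fmt.Println("no preload image") // matches aaa_download_only_test.go:113
	}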

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-798204 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-798204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-798204
--- SKIP: TestDownloadOnlyKic (0.55s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
